00:00:00.001 Started by upstream project "autotest-per-patch" build number 132386
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.094 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:06.232 The recommended git tool is: git
00:00:06.233 using credential 00000000-0000-0000-0000-000000000002
00:00:06.234 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:06.245 Fetching changes from the remote Git repository
00:00:06.247 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:06.259 Using shallow fetch with depth 1
00:00:06.259 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:06.259 > git --version # timeout=10
00:00:06.272 > git --version # 'git version 2.39.2'
00:00:06.272 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:06.284 Setting http proxy: proxy-dmz.intel.com:911
00:00:06.284 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:13.145 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:13.159 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:13.174 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:13.174 > git config core.sparsecheckout # timeout=10
00:00:13.186 > git read-tree -mu HEAD # timeout=10
00:00:13.203 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:13.230 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:13.231 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:13.330 [Pipeline] Start of Pipeline
00:00:13.345 [Pipeline] library
00:00:13.347 Loading library shm_lib@master
00:00:13.347 Library shm_lib@master is cached. Copying from home.
00:00:13.363 [Pipeline] node
00:00:13.375 Running on GP1 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:13.377 [Pipeline] {
00:00:13.388 [Pipeline] catchError
00:00:13.391 [Pipeline] {
00:00:13.404 [Pipeline] wrap
00:00:13.412 [Pipeline] {
00:00:13.421 [Pipeline] stage
00:00:13.422 [Pipeline] { (Prologue)
00:00:13.616 [Pipeline] sh
00:00:13.898 + logger -p user.info -t JENKINS-CI
00:00:13.916 [Pipeline] echo
00:00:13.917 Node: GP1
00:00:13.926 [Pipeline] sh
00:00:14.225 [Pipeline] setCustomBuildProperty
00:00:14.238 [Pipeline] echo
00:00:14.241 Cleanup processes
00:00:14.248 [Pipeline] sh
00:00:14.537 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:14.537 2617997 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:14.549 [Pipeline] sh
00:00:14.836 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:14.836 ++ grep -v 'sudo pgrep'
00:00:14.836 ++ awk '{print $1}'
00:00:14.836 + sudo kill -9
00:00:14.836 + true
00:00:14.852 [Pipeline] cleanWs
00:00:14.864 [WS-CLEANUP] Deleting project workspace...
00:00:14.864 [WS-CLEANUP] Deferred wipeout is used...
00:00:14.870 [WS-CLEANUP] done
00:00:14.875 [Pipeline] setCustomBuildProperty
00:00:14.888 [Pipeline] sh
00:00:15.170 + sudo git config --global --replace-all safe.directory '*'
00:00:15.243 [Pipeline] httpRequest
00:00:15.957 [Pipeline] echo
00:00:15.958 Sorcerer 10.211.164.20 is alive
00:00:15.965 [Pipeline] retry
00:00:15.966 [Pipeline] {
00:00:15.975 [Pipeline] httpRequest
00:00:15.978 HttpMethod: GET
00:00:15.979 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:15.980 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:15.986 Response Code: HTTP/1.1 200 OK
00:00:15.987 Success: Status code 200 is in the accepted range: 200,404
00:00:15.987 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:29.714 [Pipeline] }
00:00:29.732 [Pipeline] // retry
00:00:29.741 [Pipeline] sh
00:00:30.028 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:30.045 [Pipeline] httpRequest
00:00:30.867 [Pipeline] echo
00:00:30.869 Sorcerer 10.211.164.20 is alive
00:00:30.881 [Pipeline] retry
00:00:30.884 [Pipeline] {
00:00:30.900 [Pipeline] httpRequest
00:00:30.905 HttpMethod: GET
00:00:30.905 URL: http://10.211.164.20/packages/spdk_92fb22519345bcb309a617ae4ad1cb7eebce6f14.tar.gz
00:00:30.906 Sending request to url: http://10.211.164.20/packages/spdk_92fb22519345bcb309a617ae4ad1cb7eebce6f14.tar.gz
00:00:30.932 Response Code: HTTP/1.1 200 OK
00:00:30.932 Success: Status code 200 is in the accepted range: 200,404
00:00:30.932 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_92fb22519345bcb309a617ae4ad1cb7eebce6f14.tar.gz
00:03:04.452 [Pipeline] }
00:03:04.470 [Pipeline] // retry
00:03:04.477 [Pipeline] sh
00:03:04.762 + tar --no-same-owner -xf spdk_92fb22519345bcb309a617ae4ad1cb7eebce6f14.tar.gz
00:03:08.068 [Pipeline] sh
00:03:08.350 + git -C spdk log --oneline -n5
00:03:08.350 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size
00:03:08.350 79daf868a dif: Add SPDK_DIF_FLAGS_NVME_PRACT for dif_generate/verify_copy()
00:03:08.350 431baf1b5 dif: Insert abstraction into dif_generate/verify_copy() for NVMe PRACT
00:03:08.350 f86091626 dif: Rename internal generate/verify_copy() by insert/strip_copy()
00:03:08.350 0383e688b bdev/nvme: Fix race between reset and qpair creation/deletion
00:03:08.361 [Pipeline] }
00:03:08.376 [Pipeline] // stage
00:03:08.386 [Pipeline] stage
00:03:08.388 [Pipeline] { (Prepare)
00:03:08.406 [Pipeline] writeFile
00:03:08.423 [Pipeline] sh
00:03:08.703 + logger -p user.info -t JENKINS-CI
00:03:08.716 [Pipeline] sh
00:03:08.996 + logger -p user.info -t JENKINS-CI
00:03:09.014 [Pipeline] sh
00:03:09.340 + cat autorun-spdk.conf
00:03:09.340 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:09.340 SPDK_TEST_NVMF=1
00:03:09.340 SPDK_TEST_NVME_CLI=1
00:03:09.340 SPDK_TEST_NVMF_NICS=mlx5
00:03:09.340 SPDK_RUN_UBSAN=1
00:03:09.340 NET_TYPE=phy
00:03:09.347 RUN_NIGHTLY=0
00:03:09.352 [Pipeline] readFile
00:03:09.379 [Pipeline] withEnv
00:03:09.381 [Pipeline] {
00:03:09.396 [Pipeline] sh
00:03:09.682 + set -ex
00:03:09.682 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:03:09.682 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:03:09.682 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:09.682 ++ SPDK_TEST_NVMF=1
00:03:09.682 ++ SPDK_TEST_NVME_CLI=1
00:03:09.682 ++ SPDK_TEST_NVMF_NICS=mlx5
00:03:09.682 ++ SPDK_RUN_UBSAN=1
00:03:09.682 ++ NET_TYPE=phy
00:03:09.682 ++ RUN_NIGHTLY=0
00:03:09.682 + case $SPDK_TEST_NVMF_NICS in
00:03:09.682 + DRIVERS=mlx5_ib
00:03:09.682 + [[ -n mlx5_ib ]]
00:03:09.682 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:03:09.682 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:03:16.245 rmmod: ERROR: Module irdma is not currently loaded
00:03:16.245 rmmod: ERROR: Module i40iw is not currently loaded
00:03:16.245 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:03:16.245 + true
00:03:16.245 + for D in $DRIVERS
00:03:16.245 + sudo modprobe mlx5_ib
00:03:16.245 + exit 0
00:03:16.254 [Pipeline] }
00:03:16.268 [Pipeline] // withEnv
00:03:16.272 [Pipeline] }
00:03:16.285 [Pipeline] // stage
00:03:16.295 [Pipeline] catchError
00:03:16.296 [Pipeline] {
00:03:16.309 [Pipeline] timeout
00:03:16.309 Timeout set to expire in 1 hr 0 min
00:03:16.311 [Pipeline] {
00:03:16.325 [Pipeline] stage
00:03:16.326 [Pipeline] { (Tests)
00:03:16.341 [Pipeline] sh
00:03:16.622 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:03:16.622 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:03:16.622 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:03:16.622 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:03:16.622 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:03:16.622 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:03:16.622 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:03:16.622 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:03:16.622 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:03:16.622 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:03:16.622 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:03:16.622 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:03:16.622 + source /etc/os-release
00:03:16.622 ++ NAME='Fedora Linux'
00:03:16.622 ++ VERSION='39 (Cloud Edition)'
00:03:16.622 ++ ID=fedora
00:03:16.622 ++ VERSION_ID=39
00:03:16.622 ++ VERSION_CODENAME=
00:03:16.622 ++ PLATFORM_ID=platform:f39
00:03:16.622 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:16.622 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:16.622 ++ LOGO=fedora-logo-icon
00:03:16.622 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:16.622 ++ HOME_URL=https://fedoraproject.org/
00:03:16.622 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:16.623 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:16.623 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:16.623 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:16.623 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:16.623 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:16.623 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:16.623 ++ SUPPORT_END=2024-11-12
00:03:16.623 ++ VARIANT='Cloud Edition'
00:03:16.623 ++ VARIANT_ID=cloud
00:03:16.623 + uname -a
00:03:16.623 Linux spdk-gp-01 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:16.623 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:03:17.999 Hugepages
00:03:17.999 node hugesize free / total
00:03:17.999 node0 1048576kB 0 / 0
00:03:17.999 node0 2048kB 0 / 0
00:03:17.999 node1 1048576kB 0 / 0
00:03:17.999 node1 2048kB 0 / 0
00:03:17.999
00:03:17.999 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:17.999 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - -
00:03:17.999 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - -
00:03:17.999 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - -
00:03:17.999 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - -
00:03:17.999 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - -
00:03:17.999 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - -
00:03:17.999 I/OAT 0000:00:04.6 8086 3c26 0 ioatdma - -
00:03:17.999 I/OAT 0000:00:04.7 8086 3c27 0 ioatdma - -
00:03:17.999 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - -
00:03:17.999 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - -
00:03:17.999 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - -
00:03:17.999 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - -
00:03:17.999 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - -
00:03:17.999 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - -
00:03:17.999 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - -
00:03:17.999 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - -
00:03:17.999 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:17.999 + rm -f /tmp/spdk-ld-path
00:03:17.999 + source autorun-spdk.conf
00:03:17.999 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:17.999 ++ SPDK_TEST_NVMF=1
00:03:17.999 ++ SPDK_TEST_NVME_CLI=1
00:03:17.999 ++ SPDK_TEST_NVMF_NICS=mlx5
00:03:17.999 ++ SPDK_RUN_UBSAN=1
00:03:17.999 ++ NET_TYPE=phy
00:03:17.999 ++ RUN_NIGHTLY=0
00:03:17.999 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:17.999 + [[ -n '' ]]
00:03:17.999 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:03:17.999 + for M in /var/spdk/build-*-manifest.txt
00:03:17.999 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:17.999 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:03:17.999 + for M in /var/spdk/build-*-manifest.txt
00:03:17.999 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:17.999 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:03:17.999 + for M in /var/spdk/build-*-manifest.txt
00:03:17.999 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:17.999 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:03:17.999 ++ uname
00:03:17.999 + [[ Linux == \L\i\n\u\x ]]
00:03:17.999 + sudo dmesg -T
00:03:17.999 + sudo dmesg --clear
00:03:17.999 + dmesg_pid=2619048
00:03:17.999 + [[ Fedora Linux == FreeBSD ]]
00:03:17.999 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:17.999 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:17.999 + sudo dmesg -Tw
00:03:17.999 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:17.999 + [[ -x /usr/src/fio-static/fio ]]
00:03:17.999 + export FIO_BIN=/usr/src/fio-static/fio
00:03:17.999 + FIO_BIN=/usr/src/fio-static/fio
00:03:17.999 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:17.999 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:17.999 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:17.999 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:17.999 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:17.999 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:17.999 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:17.999 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:17.999 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:03:18.000 12:17:23 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:03:18.000 12:17:23 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:03:18.000 12:17:23 -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:18.000 12:17:23 -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:03:18.000 12:17:23 -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:03:18.000 12:17:23 -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5
00:03:18.000 12:17:23 -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1
00:03:18.000 12:17:23 -- nvmf-phy-autotest/autorun-spdk.conf@6 -- $ NET_TYPE=phy
00:03:18.000 12:17:23 -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ RUN_NIGHTLY=0
00:03:18.000 12:17:23 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:18.000 12:17:23 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:03:18.258 12:17:23 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:03:18.258 12:17:23 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:03:18.258 12:17:23 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:18.258 12:17:23 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:18.258 12:17:23 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:18.258 12:17:23 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:18.258 12:17:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:18.258 12:17:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:18.258 12:17:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:18.258 12:17:23 -- paths/export.sh@5 -- $ export PATH
00:03:18.258 12:17:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:18.258 12:17:23 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:03:18.258 12:17:23 -- common/autobuild_common.sh@493 -- $ date +%s
00:03:18.258 12:17:23 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732101443.XXXXXX
00:03:18.258 12:17:23 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732101443.tnB3iL
00:03:18.258 12:17:23 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:18.258 12:17:23 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:03:18.258 12:17:23 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:03:18.258 12:17:23 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:03:18.258 12:17:23 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:03:18.258 12:17:23 -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:18.258 12:17:23 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:18.258 12:17:23 -- common/autotest_common.sh@10 -- $ set +x
00:03:18.258 12:17:23 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:03:18.258 12:17:23 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:18.258 12:17:23 -- pm/common@17 -- $ local monitor
00:03:18.258 12:17:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:18.258 12:17:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:18.258 12:17:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:18.258 12:17:23 -- pm/common@21 -- $ date +%s
00:03:18.258 12:17:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:18.258 12:17:23 -- pm/common@21 -- $ date +%s
00:03:18.258 12:17:23 -- pm/common@25 -- $ sleep 1
00:03:18.258 12:17:23 -- pm/common@21 -- $ date +%s
00:03:18.258 12:17:23 -- pm/common@21 -- $ date +%s
00:03:18.258 12:17:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732101443
00:03:18.258 12:17:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732101443
00:03:18.258 12:17:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732101443
00:03:18.258 12:17:23 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732101443
00:03:18.258 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732101443_collect-cpu-load.pm.log
00:03:18.258 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732101443_collect-vmstat.pm.log
00:03:18.258 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732101443_collect-cpu-temp.pm.log
00:03:18.258 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732101443_collect-bmc-pm.bmc.pm.log
00:03:19.194 12:17:24 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:19.194 12:17:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:19.194 12:17:24 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:19.194 12:17:24 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:03:19.194 12:17:24 -- spdk/autobuild.sh@16 -- $ date -u
00:03:19.194 Wed Nov 20 11:17:24 AM UTC 2024
00:03:19.194 12:17:24 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:19.194 v25.01-pre-217-g92fb22519
00:03:19.194 12:17:24 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:03:19.194 12:17:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:19.194 12:17:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:19.194 12:17:24 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:19.194 12:17:24 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:19.194 12:17:24 -- common/autotest_common.sh@10 -- $ set +x
00:03:19.194 ************************************
00:03:19.194 START TEST ubsan
00:03:19.194 ************************************
00:03:19.194 12:17:24 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:19.194 using ubsan
00:03:19.194
00:03:19.194 real 0m0.000s
00:03:19.194 user 0m0.000s
00:03:19.194 sys 0m0.000s
00:03:19.194 12:17:24 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:19.194 12:17:24 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:19.194 ************************************
00:03:19.194 END TEST ubsan
00:03:19.194 ************************************
00:03:19.194 12:17:24 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:19.194 12:17:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:19.194 12:17:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:19.194 12:17:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:19.194 12:17:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:19.194 12:17:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:19.194 12:17:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:19.194 12:17:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:19.194 12:17:24 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:03:19.453 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:03:19.453 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:03:19.712 Using 'verbs' RDMA provider
00:03:32.854 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:45.070 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:45.070 Creating mk/config.mk...done.
00:03:45.070 Creating mk/cc.flags.mk...done.
00:03:45.070 Type 'make' to build.
00:03:45.070 12:17:50 -- spdk/autobuild.sh@70 -- $ run_test make make -j16
00:03:45.070 12:17:50 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:45.070 12:17:50 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:45.070 12:17:50 -- common/autotest_common.sh@10 -- $ set +x
00:03:45.070 ************************************
00:03:45.070 START TEST make
00:03:45.070 ************************************
00:03:45.070 12:17:50 make -- common/autotest_common.sh@1129 -- $ make -j16
00:03:45.070 make[1]: Nothing to be done for 'all'.
00:03:57.298 The Meson build system
00:03:57.298 Version: 1.5.0
00:03:57.298 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
00:03:57.298 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp
00:03:57.298 Build type: native build
00:03:57.298 Program cat found: YES (/usr/bin/cat)
00:03:57.298 Project name: DPDK
00:03:57.298 Project version: 24.03.0
00:03:57.298 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:57.298 C linker for the host machine: cc ld.bfd 2.40-14
00:03:57.298 Host machine cpu family: x86_64
00:03:57.298 Host machine cpu: x86_64
00:03:57.298 Message: ## Building in Developer Mode ##
00:03:57.298 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:57.298 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:57.298 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:57.298 Program python3 found: YES (/usr/bin/python3)
00:03:57.298 Program cat found: YES (/usr/bin/cat)
00:03:57.298 Compiler for C supports arguments -march=native: YES
00:03:57.298 Checking for size of "void *" : 8
00:03:57.298 Checking for size of "void *" : 8 (cached)
00:03:57.298 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:57.298 Library m found: YES
00:03:57.298 Library numa found: YES
00:03:57.298 Has header "numaif.h" : YES
00:03:57.298 Library fdt found: NO
00:03:57.298 Library execinfo found: NO
00:03:57.298 Has header "execinfo.h" : YES
00:03:57.298 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:57.298 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:57.298 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:57.298 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:57.298 Run-time dependency openssl found: YES 3.1.1
00:03:57.298 Run-time dependency libpcap found: YES 1.10.4
00:03:57.298 Has header "pcap.h" with dependency libpcap: YES
00:03:57.298 Compiler for C supports arguments -Wcast-qual: YES
00:03:57.298 Compiler for C supports arguments -Wdeprecated: YES
00:03:57.298 Compiler for C supports arguments -Wformat: YES
00:03:57.298 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:57.298 Compiler for C supports arguments -Wformat-security: NO
00:03:57.298 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:57.298 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:57.298 Compiler for C supports arguments -Wnested-externs: YES
00:03:57.298 Compiler for C supports arguments -Wold-style-definition: YES
00:03:57.298 Compiler for C supports arguments -Wpointer-arith: YES
00:03:57.298 Compiler for C supports arguments -Wsign-compare: YES
00:03:57.298 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:57.298 Compiler for C supports arguments -Wundef: YES
00:03:57.298 Compiler for C supports arguments -Wwrite-strings: YES
00:03:57.298 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:57.298 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:57.298 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:57.298 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:57.298 Program objdump found: YES (/usr/bin/objdump)
00:03:57.298 Compiler for C supports arguments -mavx512f: YES
00:03:57.298 Checking if "AVX512 checking" compiles: YES
00:03:57.298 Fetching value of define "__SSE4_2__" : 1
00:03:57.298 Fetching value of define "__AES__" : 1
00:03:57.298 Fetching value of define "__AVX__" : 1
00:03:57.298 Fetching value of define "__AVX2__" : (undefined)
00:03:57.298 Fetching value of define "__AVX512BW__" : (undefined)
00:03:57.298 Fetching value of define "__AVX512CD__" : (undefined)
00:03:57.298 Fetching value of define "__AVX512DQ__" : (undefined)
00:03:57.298 Fetching value of define "__AVX512F__" : (undefined)
00:03:57.298 Fetching value of define "__AVX512VL__" : (undefined)
00:03:57.298 Fetching value of define "__PCLMUL__" : 1
00:03:57.298 Fetching value of define "__RDRND__" : (undefined)
00:03:57.298 Fetching value of define "__RDSEED__" : (undefined)
00:03:57.298 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:57.298 Fetching value of define "__znver1__" : (undefined)
00:03:57.298 Fetching value of define "__znver2__" : (undefined)
00:03:57.298 Fetching value of define "__znver3__" : (undefined)
00:03:57.298 Fetching value of define "__znver4__" : (undefined)
00:03:57.298 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:57.298 Message: lib/log: Defining dependency "log"
00:03:57.298 Message: lib/kvargs: Defining dependency "kvargs"
00:03:57.298 Message: lib/telemetry: Defining dependency "telemetry"
00:03:57.298 Checking for function "getentropy" : NO
00:03:57.298 Message: lib/eal: Defining dependency "eal"
00:03:57.298 Message: lib/ring: Defining dependency "ring"
00:03:57.298 Message: lib/rcu: Defining dependency "rcu"
00:03:57.298 Message: lib/mempool: Defining dependency "mempool"
00:03:57.298 Message: lib/mbuf: Defining dependency "mbuf"
00:03:57.298 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:57.298 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:57.298 Compiler for C supports arguments -mpclmul: YES
00:03:57.298 Compiler for C supports arguments -maes: YES
00:03:57.298 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:57.298 Compiler for C supports arguments -mavx512bw: YES
00:03:57.298 Compiler for C supports arguments -mavx512dq: YES
00:03:57.298 Compiler for C supports arguments -mavx512vl: YES
00:03:57.299 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:57.299 Compiler for C supports arguments -mavx2: YES
00:03:57.299 Compiler for C supports arguments -mavx: YES
00:03:57.299 Message: lib/net: Defining dependency "net"
00:03:57.299 Message: lib/meter: Defining dependency "meter"
00:03:57.299 Message: lib/ethdev: Defining dependency "ethdev"
00:03:57.299 Message: lib/pci: Defining dependency "pci"
00:03:57.299 Message: lib/cmdline: Defining dependency "cmdline"
00:03:57.299 Message: lib/hash: Defining dependency "hash"
00:03:57.299 Message: lib/timer: Defining dependency "timer"
00:03:57.299 Message: lib/compressdev: Defining dependency "compressdev"
00:03:57.299 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:57.299 Message: lib/dmadev: Defining dependency "dmadev"
00:03:57.299 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:57.299 Message: lib/power: Defining dependency "power"
00:03:57.299 Message: lib/reorder: Defining dependency "reorder"
00:03:57.299 Message: lib/security: Defining dependency "security"
00:03:57.299 Has header "linux/userfaultfd.h" : YES
00:03:57.299 Has header "linux/vduse.h" : YES
00:03:57.299 Message: lib/vhost: Defining dependency "vhost"
00:03:57.299 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:57.299 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:57.299 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:57.299 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:57.299 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:57.299 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:57.299 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:57.299 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:57.299 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:57.299 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:57.299 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:57.299 Configuring doxy-api-html.conf using configuration
00:03:57.299 Configuring doxy-api-man.conf using configuration
00:03:57.299 Program mandb found: YES (/usr/bin/mandb)
00:03:57.299 Program sphinx-build found: NO
00:03:57.299 Configuring rte_build_config.h using configuration
00:03:57.299 Message:
00:03:57.299 =================
00:03:57.299 Applications Enabled
00:03:57.299 =================
00:03:57.299
00:03:57.299 apps:
00:03:57.299
00:03:57.299
00:03:57.299 Message:
00:03:57.299 =================
00:03:57.299 Libraries Enabled
00:03:57.299 =================
00:03:57.299
00:03:57.299 libs:
00:03:57.299 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:57.299 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:57.299 cryptodev, dmadev, power, reorder, security, vhost,
00:03:57.299
00:03:57.299 Message:
00:03:57.299 ===============
00:03:57.299 Drivers Enabled
00:03:57.299 ===============
00:03:57.299
00:03:57.299 common:
00:03:57.299
00:03:57.299 bus:
00:03:57.299 pci, vdev,
00:03:57.299 mempool:
00:03:57.299 ring,
00:03:57.299 dma:
00:03:57.299
00:03:57.299 net:
00:03:57.299
00:03:57.299 crypto:
00:03:57.299
00:03:57.299 compress:
00:03:57.299
00:03:57.299 vdpa:
00:03:57.299
00:03:57.299
00:03:57.299 Message:
00:03:57.299 =================
00:03:57.299 Content Skipped
00:03:57.299 =================
00:03:57.299
00:03:57.299 apps:
00:03:57.299 dumpcap: explicitly disabled via build config
00:03:57.299 graph: explicitly disabled via build config
00:03:57.299 pdump: explicitly disabled via build config
00:03:57.299 proc-info: explicitly disabled via build config
00:03:57.299 test-acl: explicitly disabled via build config
00:03:57.299 test-bbdev: explicitly disabled via build config
00:03:57.299 test-cmdline: explicitly disabled via build config
00:03:57.299 test-compress-perf: explicitly disabled via build config
00:03:57.299 test-crypto-perf: explicitly disabled via build config
00:03:57.299 test-dma-perf: explicitly disabled via build config
00:03:57.299 test-eventdev: explicitly disabled via build config
00:03:57.299 test-fib: explicitly disabled via build config
00:03:57.299 test-flow-perf: explicitly disabled via build config
00:03:57.299 test-gpudev: explicitly disabled via build config
00:03:57.299 test-mldev: explicitly disabled via build config
00:03:57.299 test-pipeline: explicitly disabled via build config
00:03:57.299 test-pmd: explicitly disabled via build config
00:03:57.299 test-regex: explicitly disabled via build config
00:03:57.299 test-sad: explicitly disabled via build config
00:03:57.299 test-security-perf: explicitly disabled via build config
00:03:57.299
00:03:57.299 libs:
00:03:57.299 argparse: explicitly disabled via build config
00:03:57.299 metrics: explicitly disabled via build config
00:03:57.299 acl: explicitly disabled via build config
00:03:57.299 bbdev: explicitly disabled via build config
00:03:57.299 bitratestats: explicitly disabled via build config
00:03:57.299 bpf: explicitly disabled via build config
00:03:57.299 cfgfile: explicitly disabled via build config
00:03:57.299 distributor: explicitly disabled via build config
00:03:57.299 efd: explicitly disabled via build config
00:03:57.299 eventdev: explicitly disabled via build config
00:03:57.299 dispatcher: explicitly disabled via build config
00:03:57.299 gpudev: explicitly disabled via build config
00:03:57.299 gro: explicitly disabled via build config
00:03:57.299 gso: explicitly disabled via build config
00:03:57.299 ip_frag: explicitly disabled via build config
00:03:57.299 jobstats: explicitly disabled via build config
00:03:57.299 latencystats: explicitly disabled via build config
00:03:57.299 lpm: explicitly disabled via build config
00:03:57.299 member: explicitly disabled via build config
00:03:57.299 pcapng: explicitly disabled via build config
00:03:57.299 rawdev: explicitly disabled via build config
00:03:57.299 regexdev: explicitly disabled via build config
00:03:57.299 mldev: explicitly disabled via build config
00:03:57.299 rib: explicitly disabled via build config
00:03:57.299 sched: explicitly disabled via build config
00:03:57.299 stack: explicitly disabled via build config
00:03:57.299 ipsec: explicitly disabled via build config
00:03:57.299 pdcp: explicitly disabled via build config
00:03:57.299 fib: explicitly disabled via build config
00:03:57.299 port: explicitly disabled via build config
00:03:57.299 pdump: explicitly disabled via build config
00:03:57.299 table: explicitly disabled via build config
00:03:57.299 pipeline: explicitly disabled via build config
00:03:57.299 graph: explicitly disabled via build config
00:03:57.299 node: explicitly disabled via build config
00:03:57.299
00:03:57.299 drivers:
00:03:57.299 common/cpt: not in enabled drivers build config
00:03:57.299 common/dpaax: not in enabled drivers build config
00:03:57.299 common/iavf: not in enabled drivers build config
00:03:57.299 common/idpf: not in enabled drivers build config
00:03:57.300 common/ionic: not in enabled drivers build config
00:03:57.300 common/mvep: not in enabled drivers build config
00:03:57.300 common/octeontx: not in enabled drivers build config
00:03:57.300 bus/auxiliary: not in enabled drivers build config
00:03:57.300 bus/cdx: not in enabled drivers build config
00:03:57.300 bus/dpaa: not in enabled drivers build config
00:03:57.300 bus/fslmc: not in enabled drivers build config
00:03:57.300 bus/ifpga: not in enabled drivers build config
00:03:57.300 bus/platform: not in enabled drivers build config
00:03:57.300 bus/uacce: not in enabled drivers build config
00:03:57.300 bus/vmbus: not in enabled drivers build config
00:03:57.300 common/cnxk: not in enabled drivers build config
00:03:57.300 common/mlx5: not in enabled drivers build config
00:03:57.300 common/nfp: not in enabled drivers build config
00:03:57.300 common/nitrox: not in enabled drivers build config
00:03:57.300 common/qat: not in enabled drivers build config
00:03:57.300 common/sfc_efx: not in enabled drivers build config
00:03:57.300 mempool/bucket: not in enabled drivers build config
00:03:57.300 mempool/cnxk: not in enabled drivers build config
00:03:57.300 mempool/dpaa: not in enabled drivers build config
00:03:57.300 mempool/dpaa2: not in enabled drivers build config
00:03:57.300 mempool/octeontx: not in enabled drivers build config
00:03:57.300 mempool/stack: not in enabled drivers build config
00:03:57.300 dma/cnxk: not in enabled drivers build config
00:03:57.300 dma/dpaa: not in enabled drivers build config
00:03:57.300 dma/dpaa2: not in enabled drivers build config
00:03:57.300 dma/hisilicon: not in enabled drivers build config
00:03:57.300 dma/idxd: not in enabled drivers build config
00:03:57.300 dma/ioat: not in enabled drivers build config
00:03:57.300 dma/skeleton: not in enabled drivers build config
00:03:57.300 net/af_packet: not in enabled drivers build config
00:03:57.300 net/af_xdp: not in enabled drivers build config
00:03:57.300 net/ark: not in enabled drivers build config
00:03:57.300 net/atlantic: not in enabled drivers build config
00:03:57.300 net/avp: not in enabled drivers build config
00:03:57.300 net/axgbe: not in enabled drivers build config
00:03:57.300 net/bnx2x: not in enabled drivers build config
00:03:57.300 net/bnxt: not in enabled drivers build config
00:03:57.300 net/bonding: not in enabled drivers build config
00:03:57.300 net/cnxk: not in enabled drivers build config
00:03:57.300 net/cpfl: not in enabled drivers build config
00:03:57.300 net/cxgbe: not in enabled drivers build config
00:03:57.300 net/dpaa: not in enabled drivers build config
00:03:57.300 net/dpaa2: not in enabled drivers build config
00:03:57.300 net/e1000: not in enabled drivers build config
00:03:57.300 net/ena: not in enabled drivers build config
00:03:57.300 net/enetc: not in enabled drivers build config
00:03:57.300 net/enetfec: not in enabled drivers build config
00:03:57.300 net/enic: not in enabled drivers build config
00:03:57.300 net/failsafe: not in enabled drivers build config
00:03:57.300 net/fm10k: not in enabled drivers build config
00:03:57.300 net/gve: not in enabled drivers build config
00:03:57.300 net/hinic: not in enabled drivers build config
00:03:57.300 net/hns3: not in enabled drivers build config
00:03:57.300 net/i40e: not in enabled drivers build config
00:03:57.300 net/iavf: not in enabled drivers build config
00:03:57.300 net/ice: not in enabled drivers build config
00:03:57.300 net/idpf: not in enabled drivers build config
00:03:57.300 net/igc: not in enabled drivers build config
00:03:57.300 net/ionic: not in enabled drivers build config
00:03:57.300 net/ipn3ke: not in enabled drivers build config
00:03:57.300 net/ixgbe: not in enabled drivers build config
00:03:57.300 net/mana: not in enabled drivers build config
00:03:57.300 net/memif: not in enabled drivers build config
00:03:57.300 net/mlx4: not in enabled drivers build config
00:03:57.300 net/mlx5: not in enabled drivers build config
00:03:57.300 net/mvneta: not in enabled drivers build config
00:03:57.300 net/mvpp2: not in enabled drivers build config
00:03:57.300 net/netvsc: not in enabled drivers build config
00:03:57.300 net/nfb: not in enabled drivers build config
00:03:57.300 net/nfp: not in enabled drivers build config
00:03:57.300 net/ngbe: not in enabled drivers build config
00:03:57.300 net/null: not in enabled drivers build config
00:03:57.300 net/octeontx: not in enabled drivers build config
00:03:57.300 net/octeon_ep: not in enabled drivers build config
00:03:57.300 net/pcap: not in enabled drivers build config
00:03:57.300 net/pfe: not in enabled drivers build config
00:03:57.300 net/qede: not in enabled drivers build config
00:03:57.300 net/ring: not in enabled drivers build config
00:03:57.300 net/sfc: not in enabled drivers build config
00:03:57.300 net/softnic: not in enabled drivers build config
00:03:57.300 net/tap: not in enabled drivers build config
00:03:57.300 net/thunderx: not in enabled drivers build config
00:03:57.300 net/txgbe: not in enabled drivers build config
00:03:57.300 net/vdev_netvsc: not in enabled drivers build config
00:03:57.300 net/vhost: not in enabled drivers build config
00:03:57.300 net/virtio: not in enabled drivers build config
00:03:57.300 net/vmxnet3: not in enabled drivers build config
00:03:57.300 raw/*: missing internal dependency, "rawdev"
00:03:57.300 crypto/armv8: not in enabled drivers build config
00:03:57.300 crypto/bcmfs: not in enabled drivers build config
00:03:57.300 crypto/caam_jr: not in enabled drivers build config
00:03:57.300 crypto/ccp: not in enabled drivers build config
00:03:57.300 crypto/cnxk: not in enabled drivers build config
00:03:57.300 crypto/dpaa_sec: not in enabled drivers build config
00:03:57.300 crypto/dpaa2_sec: not in enabled drivers build config
00:03:57.300 crypto/ipsec_mb: not in enabled drivers build config
00:03:57.300 crypto/mlx5: not in enabled drivers build config
00:03:57.300 crypto/mvsam: not in enabled drivers build config
00:03:57.300 crypto/nitrox: not in enabled drivers build config
00:03:57.300 crypto/null: not in enabled drivers build config
00:03:57.300 crypto/octeontx: not in enabled drivers build config
00:03:57.300 crypto/openssl: not in enabled drivers build config
00:03:57.300 crypto/scheduler: not in enabled drivers build config
00:03:57.300 crypto/uadk: not in enabled drivers build config
00:03:57.300 crypto/virtio: not in enabled drivers build config
00:03:57.300 compress/isal: not in enabled drivers build config
00:03:57.300 compress/mlx5: not in enabled drivers build config
00:03:57.300 compress/nitrox: not in enabled drivers build config
00:03:57.300 compress/octeontx: not in enabled drivers build config
00:03:57.300 compress/zlib: not in enabled drivers build config
00:03:57.300 regex/*: missing internal dependency, "regexdev"
00:03:57.300 ml/*: missing internal dependency, "mldev"
00:03:57.300 vdpa/ifc: not in enabled drivers build config
00:03:57.300 vdpa/mlx5: not in enabled drivers build config
00:03:57.300 vdpa/nfp: not in enabled drivers build config
00:03:57.300 vdpa/sfc: not in enabled drivers build config
00:03:57.300 event/*: missing internal dependency, "eventdev"
00:03:57.300 baseband/*: missing internal dependency, "bbdev"
00:03:57.301 gpu/*: missing internal dependency, "gpudev"
00:03:57.301
00:03:57.301
00:03:57.301 Build targets in project: 85
00:03:57.301
00:03:57.301 DPDK 24.03.0
00:03:57.301
00:03:57.301 User defined options
00:03:57.301 buildtype : debug
00:03:57.301 default_library : shared
00:03:57.301 libdir : lib
00:03:57.301 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:03:57.301 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:57.301 c_link_args :
00:03:57.301 cpu_instruction_set: native
00:03:57.301 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:03:57.301 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:03:57.301 enable_docs : false
00:03:57.301 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:03:57.301 enable_kmods : false
00:03:57.301 max_lcores : 128
00:03:57.301 tests : false
00:03:57.301
00:03:57.301 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:57.301 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp'
00:03:57.301 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:57.301 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:57.301 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:57.301 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:57.301 [5/268] Linking static target lib/librte_log.a
00:03:57.301 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:57.301 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:57.301 [8/268] Linking static target lib/librte_kvargs.a
00:03:57.875 [9/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:57.875 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:57.875 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:57.875 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:57.875 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:58.139 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:58.139 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:58.139 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:58.139 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:58.139 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:58.139 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:58.139 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:58.139 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:58.399 [22/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:58.399 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:58.399 [24/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:58.399 [25/268] Linking static target lib/librte_telemetry.a
00:03:58.399 [26/268] Linking target lib/librte_log.so.24.1
00:03:58.399 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:58.399 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:58.399 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:58.399 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:58.399 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:58.663 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:58.663 [33/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:03:58.663 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:58.663 [35/268] Linking target lib/librte_kvargs.so.24.1
00:03:58.924 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:58.924 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:58.924 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:58.924 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:58.924 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:58.924 [41/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:03:58.924 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:59.185 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:59.185 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:59.185 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:59.185 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:59.185 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:59.185 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:59.185 [49/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:59.185 [50/268] Linking target lib/librte_telemetry.so.24.1
00:03:59.443 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:59.443 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:59.443 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:59.443 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:59.443 [55/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:03:59.704 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:59.704 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:59.704 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:59.704 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:59.705 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:59.705 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:59.965 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:59.965 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:59.965 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:59.965 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:59.965 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:59.965 [67/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:59.965 [68/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:04:00.223 [69/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:04:00.223 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:04:00.223 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:04:00.505 [72/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:04:00.505 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:04:00.505 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:04:00.505 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:04:00.505 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:04:00.772 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:04:00.772 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:04:00.772 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:04:00.772 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:04:00.772 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:04:00.772 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:04:01.039 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:04:01.039 [84/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:04:01.039 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:04:01.039 [86/268] Linking static target lib/librte_rcu.a
00:04:01.039 [87/268] Linking static target lib/librte_eal.a
00:04:01.039 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:04:01.299 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:04:01.299 [90/268] Linking static target lib/librte_ring.a
00:04:01.299 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:04:01.299 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:04:01.299 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:04:01.563 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:04:01.563 [95/268] Linking static target lib/librte_mempool.a
00:04:01.563 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:04:01.563 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:04:01.563 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:04:01.563 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:04:01.563 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:04:01.824 [101/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:04:01.824 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:04:01.824 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:04:02.082 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:04:02.082 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:04:02.082 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:04:02.343 [107/268] Linking static target lib/librte_net.a
00:04:02.343 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:04:02.343 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:04:02.343 [110/268] Linking static target lib/librte_meter.a
00:04:02.602 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:04:02.602 [112/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:04:02.602 [113/268] Linking static target lib/librte_mbuf.a
00:04:02.602 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:04:02.602 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:04:02.602 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:04:02.872 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:04:02.872 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:04:02.872 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:04:02.872 [120/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:04:02.872 [121/268] Linking static target lib/librte_pci.a
00:04:03.173 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:04:03.173 [123/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:04:03.173 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:04:03.173 [125/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:04:03.173 [126/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:04:03.173 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:04:03.173 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:04:03.173 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:04:03.173 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:04:03.443 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:04:03.443 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:04:03.443 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:04:03.443 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:04:03.443 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:04:03.443 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:04:03.443 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:04:03.443 [138/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:04:03.443 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:04:03.443 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:04:03.700 [141/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:04:03.700 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:04:03.700 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:04:03.961 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:04:03.961 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:04:03.961 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:04:03.961 [147/268] Linking static target lib/librte_cmdline.a
00:04:04.219 [148/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:04:04.480 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:04:04.480 [150/268] Linking static target lib/librte_timer.a
00:04:04.480 [151/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:04:04.480 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:04:04.480 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:04:04.480 [154/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:04:04.480 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:04:04.480 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:04:04.738 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:04:04.738 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:04:04.738 [159/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:04:04.738 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:04:04.738 [161/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:04:04.738 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:04:04.738 [163/268] Linking static target lib/librte_ethdev.a
00:04:04.738 [164/268] Linking static target lib/librte_hash.a
00:04:04.738 [165/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:04:04.738 [166/268] Linking static target lib/librte_compressdev.a
00:04:05.004 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:04:05.004 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:04:05.004 [169/268] Linking static target lib/librte_dmadev.a
00:04:05.004 [170/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:04:05.266 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:04:05.525 [172/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:04:05.525 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:04:05.525 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:04:05.525 [175/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:04:05.525 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:04:05.525 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:04:05.525 [178/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:04:05.525 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:04:05.785 [180/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:06.045 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:04:06.045 [182/268] Linking static target lib/librte_power.a
00:04:06.045 [183/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:04:06.045 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:06.306 [185/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:04:06.306 [186/268] Linking static target lib/librte_cryptodev.a
00:04:06.306 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:04:06.306 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:04:06.306 [189/268] Linking static target lib/librte_reorder.a
00:04:06.306 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:04:06.572 [191/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:04:06.572 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:04:06.572 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:04:06.839 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:04:06.839 [195/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:04:06.839 [196/268] Linking static target lib/librte_security.a
00:04:06.839 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:04:06.839 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:04:06.839 [199/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:04:07.098 [200/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:04:07.098 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:04:07.098 [202/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:04:07.357 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:04:07.357 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:04:07.357 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:04:07.357 [206/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:04:07.616 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:04:07.616 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:04:07.616 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:04:07.616 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:04:07.879 [211/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:04:07.879 [212/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:04:07.879 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:04:07.879 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:04:07.879 [215/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:04:07.879 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:04:07.879 [217/268] Linking static target drivers/librte_bus_vdev.a
00:04:07.879 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:04:07.879 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:04:07.879 [220/268] Linking static target drivers/librte_bus_pci.a
00:04:07.879 [221/268] Generating
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.136 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:08.136 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:08.136 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:08.136 [225/268] Linking static target drivers/librte_mempool_ring.a 00:04:08.136 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.395 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.293 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:10.293 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.293 [230/268] Linking target lib/librte_eal.so.24.1 00:04:10.552 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:10.552 [232/268] Linking target lib/librte_meter.so.24.1 00:04:10.552 [233/268] Linking target lib/librte_dmadev.so.24.1 00:04:10.552 [234/268] Linking target lib/librte_ring.so.24.1 00:04:10.552 [235/268] Linking target lib/librte_timer.so.24.1 00:04:10.552 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:10.552 [237/268] Linking target lib/librte_pci.so.24.1 00:04:10.552 [238/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.810 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:10.810 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:10.810 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:10.810 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:10.810 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:10.810 [244/268] Linking target lib/librte_rcu.so.24.1 00:04:10.810 [245/268] Linking target lib/librte_mempool.so.24.1 00:04:10.810 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:10.810 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:10.810 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:10.810 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:10.810 [250/268] Linking target lib/librte_mbuf.so.24.1 00:04:11.069 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:11.069 [252/268] Linking target lib/librte_compressdev.so.24.1 00:04:11.069 [253/268] Linking target lib/librte_reorder.so.24.1 00:04:11.069 [254/268] Linking target lib/librte_net.so.24.1 00:04:11.069 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:04:11.327 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:11.327 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:11.327 [258/268] Linking target lib/librte_hash.so.24.1 00:04:11.327 [259/268] Linking target lib/librte_cmdline.so.24.1 00:04:11.327 [260/268] Linking target lib/librte_security.so.24.1 00:04:11.327 [261/268] Linking target lib/librte_ethdev.so.24.1 00:04:11.327 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 
00:04:11.585 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:11.585 [264/268] Linking target lib/librte_power.so.24.1 00:04:16.848 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:16.848 [266/268] Linking static target lib/librte_vhost.a 00:04:18.222 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.222 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:18.222 INFO: autodetecting backend as ninja 00:04:18.222 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 16 00:04:44.774 CC lib/log/log.o 00:04:44.774 CC lib/ut_mock/mock.o 00:04:44.774 CC lib/log/log_flags.o 00:04:44.774 CC lib/log/log_deprecated.o 00:04:44.774 CC lib/ut/ut.o 00:04:44.774 LIB libspdk_ut.a 00:04:44.774 LIB libspdk_ut_mock.a 00:04:44.774 LIB libspdk_log.a 00:04:44.774 SO libspdk_ut_mock.so.6.0 00:04:44.774 SO libspdk_ut.so.2.0 00:04:44.774 SO libspdk_log.so.7.1 00:04:44.774 SYMLINK libspdk_ut_mock.so 00:04:44.774 SYMLINK libspdk_ut.so 00:04:44.774 SYMLINK libspdk_log.so 00:04:44.774 CXX lib/trace_parser/trace.o 00:04:44.774 CC lib/dma/dma.o 00:04:44.774 CC lib/util/base64.o 00:04:44.774 CC lib/util/bit_array.o 00:04:44.774 CC lib/util/cpuset.o 00:04:44.774 CC lib/ioat/ioat.o 00:04:44.774 CC lib/util/crc16.o 00:04:44.774 CC lib/util/crc32.o 00:04:44.774 CC lib/util/crc32c.o 00:04:44.774 CC lib/util/crc32_ieee.o 00:04:44.774 CC lib/util/crc64.o 00:04:44.774 CC lib/util/dif.o 00:04:44.774 CC lib/util/fd.o 00:04:44.774 CC lib/util/fd_group.o 00:04:44.774 CC lib/util/file.o 00:04:44.774 CC lib/vfio_user/host/vfio_user_pci.o 00:04:44.774 CC lib/util/hexlify.o 00:04:44.774 CC lib/vfio_user/host/vfio_user.o 00:04:44.774 CC lib/util/iov.o 00:04:44.774 CC lib/util/math.o 00:04:44.774 CC lib/util/net.o 00:04:44.774 CC lib/util/pipe.o 00:04:44.774 CC lib/util/strerror_tls.o 00:04:44.774 CC lib/util/string.o 00:04:44.774 LIB libspdk_dma.a 00:04:44.774 CC lib/util/uuid.o 00:04:44.774 CC lib/util/xor.o 00:04:44.774 SO libspdk_dma.so.5.0 00:04:44.774 CC lib/util/zipf.o 00:04:44.774 SYMLINK libspdk_dma.so 00:04:44.774 CC lib/util/md5.o 00:04:44.774 LIB libspdk_ioat.a 00:04:44.774 SO libspdk_ioat.so.7.0 00:04:44.774 LIB libspdk_vfio_user.a 00:04:44.774 SYMLINK libspdk_ioat.so 00:04:44.774 SO libspdk_vfio_user.so.5.0 00:04:44.774 SYMLINK libspdk_vfio_user.so 00:04:44.774 LIB libspdk_util.a 00:04:44.774 SO libspdk_util.so.10.1 00:04:44.774 SYMLINK libspdk_util.so 00:04:44.774 CC lib/conf/conf.o 00:04:44.774 CC lib/json/json_parse.o 00:04:44.774 CC lib/idxd/idxd.o 00:04:44.774 CC lib/rdma_utils/rdma_utils.o 00:04:44.774 CC lib/json/json_util.o 00:04:44.774 CC lib/idxd/idxd_user.o 00:04:44.774 CC lib/json/json_write.o 00:04:44.774 CC lib/idxd/idxd_kernel.o 00:04:44.774 CC lib/vmd/vmd.o 00:04:44.774 CC lib/env_dpdk/env.o 00:04:44.774 CC lib/vmd/led.o 00:04:44.774 CC lib/env_dpdk/memory.o 00:04:44.774 CC lib/env_dpdk/pci.o 00:04:44.774 CC lib/env_dpdk/init.o 00:04:44.774 CC lib/env_dpdk/threads.o 00:04:44.774 LIB libspdk_trace_parser.a 00:04:44.774 SO libspdk_trace_parser.so.6.0 00:04:44.774 SYMLINK libspdk_trace_parser.so 00:04:44.775 CC lib/env_dpdk/pci_ioat.o 00:04:44.775 CC lib/env_dpdk/pci_virtio.o 00:04:44.775 CC lib/env_dpdk/pci_vmd.o 00:04:44.775 CC lib/env_dpdk/pci_idxd.o 00:04:44.775 LIB libspdk_conf.a 00:04:44.775 CC lib/env_dpdk/pci_event.o 00:04:44.775 CC lib/env_dpdk/sigbus_handler.o 00:04:44.775 CC 
lib/env_dpdk/pci_dpdk.o 00:04:44.775 SO libspdk_conf.so.6.0 00:04:44.775 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:44.775 LIB libspdk_rdma_utils.a 00:04:44.775 LIB libspdk_json.a 00:04:44.775 SO libspdk_rdma_utils.so.1.0 00:04:44.775 SYMLINK libspdk_conf.so 00:04:44.775 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:44.775 SO libspdk_json.so.6.0 00:04:44.775 SYMLINK libspdk_rdma_utils.so 00:04:44.775 SYMLINK libspdk_json.so 00:04:44.775 CC lib/rdma_provider/common.o 00:04:44.775 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:44.775 CC lib/jsonrpc/jsonrpc_server.o 00:04:44.775 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:44.775 CC lib/jsonrpc/jsonrpc_client.o 00:04:44.775 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:44.775 LIB libspdk_idxd.a 00:04:44.775 SO libspdk_idxd.so.12.1 00:04:44.775 LIB libspdk_vmd.a 00:04:44.775 SYMLINK libspdk_idxd.so 00:04:44.775 SO libspdk_vmd.so.6.0 00:04:44.775 LIB libspdk_rdma_provider.a 00:04:44.775 SYMLINK libspdk_vmd.so 00:04:44.775 SO libspdk_rdma_provider.so.7.0 00:04:44.775 LIB libspdk_jsonrpc.a 00:04:44.775 SO libspdk_jsonrpc.so.6.0 00:04:44.775 SYMLINK libspdk_rdma_provider.so 00:04:44.775 SYMLINK libspdk_jsonrpc.so 00:04:44.775 CC lib/rpc/rpc.o 00:04:44.775 LIB libspdk_rpc.a 00:04:44.775 SO libspdk_rpc.so.6.0 00:04:44.775 SYMLINK libspdk_rpc.so 00:04:44.775 CC lib/trace/trace.o 00:04:44.775 CC lib/keyring/keyring.o 00:04:44.775 CC lib/trace/trace_flags.o 00:04:44.775 CC lib/keyring/keyring_rpc.o 00:04:44.775 CC lib/trace/trace_rpc.o 00:04:44.775 CC lib/notify/notify.o 00:04:44.775 CC lib/notify/notify_rpc.o 00:04:44.775 LIB libspdk_notify.a 00:04:44.775 SO libspdk_notify.so.6.0 00:04:44.775 SYMLINK libspdk_notify.so 00:04:44.775 LIB libspdk_keyring.a 00:04:44.775 LIB libspdk_trace.a 00:04:44.775 SO libspdk_keyring.so.2.0 00:04:44.775 SO libspdk_trace.so.11.0 00:04:44.775 SYMLINK libspdk_keyring.so 00:04:44.775 SYMLINK libspdk_trace.so 00:04:44.775 CC lib/sock/sock.o 00:04:44.775 CC lib/sock/sock_rpc.o 00:04:44.775 CC lib/thread/thread.o 00:04:44.775 CC lib/thread/iobuf.o 00:04:44.775 LIB libspdk_env_dpdk.a 00:04:44.775 SO libspdk_env_dpdk.so.15.1 00:04:44.775 SYMLINK libspdk_env_dpdk.so 00:04:44.775 LIB libspdk_sock.a 00:04:44.775 SO libspdk_sock.so.10.0 00:04:44.775 SYMLINK libspdk_sock.so 00:04:45.036 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:45.036 CC lib/nvme/nvme_ctrlr.o 00:04:45.036 CC lib/nvme/nvme_fabric.o 00:04:45.036 CC lib/nvme/nvme_ns_cmd.o 00:04:45.036 CC lib/nvme/nvme_ns.o 00:04:45.036 CC lib/nvme/nvme_pcie_common.o 00:04:45.036 CC lib/nvme/nvme_pcie.o 00:04:45.036 CC lib/nvme/nvme_qpair.o 00:04:45.036 CC lib/nvme/nvme.o 00:04:45.036 CC lib/nvme/nvme_quirks.o 00:04:45.036 CC lib/nvme/nvme_transport.o 00:04:45.036 CC lib/nvme/nvme_discovery.o 00:04:45.037 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:45.037 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:45.037 CC lib/nvme/nvme_tcp.o 00:04:45.981 CC lib/nvme/nvme_opal.o 00:04:45.981 CC lib/nvme/nvme_io_msg.o 00:04:45.981 CC lib/nvme/nvme_poll_group.o 00:04:45.981 CC lib/nvme/nvme_zns.o 00:04:45.981 CC lib/nvme/nvme_stubs.o 00:04:45.981 CC lib/nvme/nvme_auth.o 00:04:46.243 LIB libspdk_thread.a 00:04:46.243 CC lib/nvme/nvme_cuse.o 00:04:46.243 CC lib/nvme/nvme_rdma.o 00:04:46.243 SO libspdk_thread.so.11.0 00:04:46.243 SYMLINK libspdk_thread.so 00:04:46.505 CC lib/init/json_config.o 00:04:46.505 CC lib/blob/blobstore.o 00:04:46.505 CC lib/accel/accel.o 00:04:46.505 CC lib/virtio/virtio.o 00:04:46.505 CC lib/init/subsystem.o 00:04:46.505 CC lib/fsdev/fsdev.o 00:04:46.505 CC lib/blob/request.o 00:04:46.767 CC lib/fsdev/fsdev_io.o 
00:04:46.767 CC lib/init/subsystem_rpc.o 00:04:46.767 CC lib/virtio/virtio_vhost_user.o 00:04:46.767 CC lib/accel/accel_rpc.o 00:04:46.767 CC lib/blob/zeroes.o 00:04:46.767 CC lib/blob/blob_bs_dev.o 00:04:46.767 CC lib/fsdev/fsdev_rpc.o 00:04:47.028 CC lib/init/rpc.o 00:04:47.028 CC lib/accel/accel_sw.o 00:04:47.028 CC lib/virtio/virtio_vfio_user.o 00:04:47.028 CC lib/virtio/virtio_pci.o 00:04:47.286 LIB libspdk_init.a 00:04:47.286 SO libspdk_init.so.6.0 00:04:47.286 LIB libspdk_fsdev.a 00:04:47.286 SYMLINK libspdk_init.so 00:04:47.286 SO libspdk_fsdev.so.2.0 00:04:47.286 SYMLINK libspdk_fsdev.so 00:04:47.544 LIB libspdk_virtio.a 00:04:47.544 SO libspdk_virtio.so.7.0 00:04:47.544 CC lib/event/app.o 00:04:47.544 CC lib/event/reactor.o 00:04:47.544 CC lib/event/log_rpc.o 00:04:47.544 CC lib/event/app_rpc.o 00:04:47.544 CC lib/event/scheduler_static.o 00:04:47.544 SYMLINK libspdk_virtio.so 00:04:47.544 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:47.802 LIB libspdk_accel.a 00:04:47.802 SO libspdk_accel.so.16.0 00:04:47.802 SYMLINK libspdk_accel.so 00:04:47.802 LIB libspdk_nvme.a 00:04:48.060 LIB libspdk_event.a 00:04:48.060 SO libspdk_event.so.14.0 00:04:48.060 SO libspdk_nvme.so.15.0 00:04:48.060 CC lib/bdev/bdev.o 00:04:48.060 CC lib/bdev/bdev_rpc.o 00:04:48.060 CC lib/bdev/bdev_zone.o 00:04:48.060 CC lib/bdev/part.o 00:04:48.060 CC lib/bdev/scsi_nvme.o 00:04:48.060 SYMLINK libspdk_event.so 00:04:48.318 SYMLINK libspdk_nvme.so 00:04:48.318 LIB libspdk_fuse_dispatcher.a 00:04:48.318 SO libspdk_fuse_dispatcher.so.1.0 00:04:48.318 SYMLINK libspdk_fuse_dispatcher.so 00:04:50.218 LIB libspdk_blob.a 00:04:50.218 SO libspdk_blob.so.11.0 00:04:50.218 SYMLINK libspdk_blob.so 00:04:50.476 CC lib/lvol/lvol.o 00:04:50.476 CC lib/blobfs/blobfs.o 00:04:50.476 CC lib/blobfs/tree.o 00:04:51.414 LIB libspdk_bdev.a 00:04:51.414 SO libspdk_bdev.so.17.0 00:04:51.414 SYMLINK libspdk_bdev.so 00:04:51.414 LIB libspdk_blobfs.a 00:04:51.414 SO libspdk_blobfs.so.10.0 00:04:51.415 SYMLINK libspdk_blobfs.so 00:04:51.415 CC lib/nbd/nbd.o 00:04:51.415 CC lib/nvmf/ctrlr.o 00:04:51.415 LIB libspdk_lvol.a 00:04:51.415 CC lib/nbd/nbd_rpc.o 00:04:51.415 CC lib/ublk/ublk.o 00:04:51.415 CC lib/nvmf/ctrlr_discovery.o 00:04:51.415 CC lib/scsi/dev.o 00:04:51.415 CC lib/ublk/ublk_rpc.o 00:04:51.415 CC lib/ftl/ftl_core.o 00:04:51.415 CC lib/scsi/lun.o 00:04:51.415 CC lib/nvmf/ctrlr_bdev.o 00:04:51.415 CC lib/scsi/port.o 00:04:51.415 CC lib/ftl/ftl_init.o 00:04:51.415 CC lib/nvmf/subsystem.o 00:04:51.415 CC lib/scsi/scsi.o 00:04:51.415 CC lib/ftl/ftl_layout.o 00:04:51.674 SO libspdk_lvol.so.10.0 00:04:51.674 SYMLINK libspdk_lvol.so 00:04:51.674 CC lib/nvmf/nvmf.o 00:04:51.674 CC lib/ftl/ftl_debug.o 00:04:51.674 CC lib/nvmf/nvmf_rpc.o 00:04:51.674 CC lib/ftl/ftl_io.o 00:04:51.674 CC lib/nvmf/transport.o 00:04:51.934 CC lib/ftl/ftl_sb.o 00:04:51.934 CC lib/scsi/scsi_bdev.o 00:04:51.934 CC lib/nvmf/tcp.o 00:04:51.934 CC lib/ftl/ftl_l2p.o 00:04:51.934 CC lib/nvmf/stubs.o 00:04:51.934 LIB libspdk_nbd.a 00:04:51.934 CC lib/ftl/ftl_l2p_flat.o 00:04:52.195 SO libspdk_nbd.so.7.0 00:04:52.195 CC lib/scsi/scsi_pr.o 00:04:52.195 CC lib/nvmf/mdns_server.o 00:04:52.195 SYMLINK libspdk_nbd.so 00:04:52.195 CC lib/nvmf/rdma.o 00:04:52.195 CC lib/nvmf/auth.o 00:04:52.195 CC lib/ftl/ftl_nv_cache.o 00:04:52.195 LIB libspdk_ublk.a 00:04:52.195 CC lib/ftl/ftl_band.o 00:04:52.461 SO libspdk_ublk.so.3.0 00:04:52.461 CC lib/scsi/scsi_rpc.o 00:04:52.461 SYMLINK libspdk_ublk.so 00:04:52.461 CC lib/scsi/task.o 00:04:52.461 CC lib/ftl/ftl_band_ops.o 
00:04:52.461 CC lib/ftl/ftl_writer.o 00:04:52.461 CC lib/ftl/ftl_rq.o 00:04:52.768 CC lib/ftl/ftl_reloc.o 00:04:52.768 CC lib/ftl/ftl_l2p_cache.o 00:04:52.768 CC lib/ftl/ftl_p2l.o 00:04:52.768 LIB libspdk_scsi.a 00:04:52.768 CC lib/ftl/ftl_p2l_log.o 00:04:52.768 SO libspdk_scsi.so.9.0 00:04:52.768 CC lib/ftl/mngt/ftl_mngt.o 00:04:52.768 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:52.768 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:52.768 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:52.768 SYMLINK libspdk_scsi.so 00:04:52.768 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:52.768 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:53.033 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:53.033 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:53.033 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:53.033 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:53.033 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:53.296 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:53.296 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:53.296 CC lib/ftl/utils/ftl_conf.o 00:04:53.296 CC lib/iscsi/conn.o 00:04:53.296 CC lib/iscsi/init_grp.o 00:04:53.296 CC lib/vhost/vhost.o 00:04:53.296 CC lib/vhost/vhost_rpc.o 00:04:53.296 CC lib/iscsi/iscsi.o 00:04:53.296 CC lib/iscsi/param.o 00:04:53.296 CC lib/vhost/vhost_scsi.o 00:04:53.557 CC lib/vhost/vhost_blk.o 00:04:53.557 CC lib/ftl/utils/ftl_md.o 00:04:53.557 CC lib/ftl/utils/ftl_mempool.o 00:04:53.557 CC lib/iscsi/portal_grp.o 00:04:53.557 CC lib/vhost/rte_vhost_user.o 00:04:53.557 CC lib/ftl/utils/ftl_bitmap.o 00:04:53.557 CC lib/ftl/utils/ftl_property.o 00:04:53.815 CC lib/iscsi/tgt_node.o 00:04:53.815 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:53.815 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:53.815 CC lib/iscsi/iscsi_subsystem.o 00:04:53.815 CC lib/iscsi/iscsi_rpc.o 00:04:53.815 CC lib/iscsi/task.o 00:04:54.077 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:54.077 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:54.077 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:54.077 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:54.077 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:54.077 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:54.337 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:54.337 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:54.337 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:54.337 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:54.337 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:54.337 CC lib/ftl/base/ftl_base_dev.o 00:04:54.337 CC lib/ftl/base/ftl_base_bdev.o 00:04:54.337 CC lib/ftl/ftl_trace.o 00:04:54.597 LIB libspdk_ftl.a 00:04:54.597 LIB libspdk_nvmf.a 00:04:54.856 SO libspdk_nvmf.so.20.0 00:04:54.856 SO libspdk_ftl.so.9.0 00:04:54.856 LIB libspdk_vhost.a 00:04:54.856 SYMLINK libspdk_nvmf.so 00:04:55.115 LIB libspdk_iscsi.a 00:04:55.115 SO libspdk_vhost.so.8.0 00:04:55.115 SO libspdk_iscsi.so.8.0 00:04:55.115 SYMLINK libspdk_vhost.so 00:04:55.115 SYMLINK libspdk_ftl.so 00:04:55.115 SYMLINK libspdk_iscsi.so 00:04:55.685 CC module/env_dpdk/env_dpdk_rpc.o 00:04:55.685 CC module/accel/error/accel_error.o 00:04:55.685 CC module/fsdev/aio/fsdev_aio.o 00:04:55.685 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:55.685 CC module/accel/error/accel_error_rpc.o 00:04:55.685 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:55.685 CC module/sock/posix/posix.o 00:04:55.685 CC module/fsdev/aio/linux_aio_mgr.o 00:04:55.685 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:55.685 CC module/accel/iaa/accel_iaa.o 00:04:55.685 CC module/blob/bdev/blob_bdev.o 00:04:55.685 CC module/keyring/file/keyring.o 00:04:55.685 CC module/accel/dsa/accel_dsa.o 00:04:55.685 CC module/scheduler/gscheduler/gscheduler.o 00:04:55.685 CC 
module/keyring/linux/keyring.o 00:04:55.685 CC module/accel/ioat/accel_ioat.o 00:04:55.685 LIB libspdk_env_dpdk_rpc.a 00:04:55.685 SO libspdk_env_dpdk_rpc.so.6.0 00:04:55.685 SYMLINK libspdk_env_dpdk_rpc.so 00:04:55.685 CC module/keyring/linux/keyring_rpc.o 00:04:55.685 CC module/accel/ioat/accel_ioat_rpc.o 00:04:55.685 CC module/accel/iaa/accel_iaa_rpc.o 00:04:55.685 CC module/keyring/file/keyring_rpc.o 00:04:55.685 CC module/accel/dsa/accel_dsa_rpc.o 00:04:55.944 LIB libspdk_scheduler_dpdk_governor.a 00:04:55.944 LIB libspdk_scheduler_gscheduler.a 00:04:55.944 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:55.944 SO libspdk_scheduler_gscheduler.so.4.0 00:04:55.944 LIB libspdk_scheduler_dynamic.a 00:04:55.944 LIB libspdk_accel_error.a 00:04:55.944 SO libspdk_scheduler_dynamic.so.4.0 00:04:55.944 SO libspdk_accel_error.so.2.0 00:04:55.944 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:55.944 SYMLINK libspdk_scheduler_gscheduler.so 00:04:55.944 SYMLINK libspdk_scheduler_dynamic.so 00:04:55.944 LIB libspdk_blob_bdev.a 00:04:55.944 SYMLINK libspdk_accel_error.so 00:04:55.944 LIB libspdk_keyring_linux.a 00:04:55.944 LIB libspdk_accel_ioat.a 00:04:55.944 SO libspdk_blob_bdev.so.11.0 00:04:55.944 LIB libspdk_accel_iaa.a 00:04:55.944 LIB libspdk_keyring_file.a 00:04:55.944 LIB libspdk_accel_dsa.a 00:04:55.944 SO libspdk_keyring_linux.so.1.0 00:04:55.944 SO libspdk_accel_ioat.so.6.0 00:04:55.944 SO libspdk_keyring_file.so.2.0 00:04:55.944 SO libspdk_accel_iaa.so.3.0 00:04:55.944 SO libspdk_accel_dsa.so.5.0 00:04:55.944 SYMLINK libspdk_blob_bdev.so 00:04:55.944 SYMLINK libspdk_keyring_linux.so 00:04:55.944 SYMLINK libspdk_accel_ioat.so 00:04:55.944 SYMLINK libspdk_keyring_file.so 00:04:55.944 SYMLINK libspdk_accel_iaa.so 00:04:55.944 SYMLINK libspdk_accel_dsa.so 00:04:56.205 CC module/bdev/error/vbdev_error.o 00:04:56.205 CC module/bdev/gpt/gpt.o 00:04:56.205 CC module/bdev/delay/vbdev_delay.o 00:04:56.205 CC module/bdev/nvme/bdev_nvme.o 00:04:56.205 CC module/bdev/null/bdev_null.o 00:04:56.205 CC module/bdev/raid/bdev_raid.o 00:04:56.205 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:56.205 CC module/bdev/malloc/bdev_malloc.o 00:04:56.205 CC module/bdev/split/vbdev_split.o 00:04:56.205 CC module/bdev/ftl/bdev_ftl.o 00:04:56.205 CC module/bdev/lvol/vbdev_lvol.o 00:04:56.205 CC module/blobfs/bdev/blobfs_bdev.o 00:04:56.205 CC module/bdev/passthru/vbdev_passthru.o 00:04:56.464 CC module/bdev/aio/bdev_aio.o 00:04:56.464 LIB libspdk_fsdev_aio.a 00:04:56.464 SO libspdk_fsdev_aio.so.1.0 00:04:56.464 LIB libspdk_sock_posix.a 00:04:56.464 SO libspdk_sock_posix.so.6.0 00:04:56.464 SYMLINK libspdk_fsdev_aio.so 00:04:56.464 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:56.464 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:56.464 CC module/bdev/gpt/vbdev_gpt.o 00:04:56.725 SYMLINK libspdk_sock_posix.so 00:04:56.725 CC module/bdev/split/vbdev_split_rpc.o 00:04:56.725 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:56.725 CC module/bdev/error/vbdev_error_rpc.o 00:04:56.725 CC module/bdev/null/bdev_null_rpc.o 00:04:56.725 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:56.725 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:56.725 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:56.725 CC module/bdev/aio/bdev_aio_rpc.o 00:04:56.725 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:56.725 LIB libspdk_blobfs_bdev.a 00:04:56.725 LIB libspdk_bdev_delay.a 00:04:56.725 SO libspdk_blobfs_bdev.so.6.0 00:04:56.725 SO libspdk_bdev_delay.so.6.0 00:04:56.725 LIB libspdk_bdev_split.a 00:04:56.725 LIB libspdk_bdev_malloc.a 
00:04:56.985 SO libspdk_bdev_split.so.6.0 00:04:56.985 LIB libspdk_bdev_error.a 00:04:56.985 SO libspdk_bdev_malloc.so.6.0 00:04:56.985 LIB libspdk_bdev_null.a 00:04:56.985 SO libspdk_bdev_error.so.6.0 00:04:56.985 SYMLINK libspdk_blobfs_bdev.so 00:04:56.985 SYMLINK libspdk_bdev_delay.so 00:04:56.985 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:56.985 SO libspdk_bdev_null.so.6.0 00:04:56.985 LIB libspdk_bdev_passthru.a 00:04:56.985 LIB libspdk_bdev_gpt.a 00:04:56.985 CC module/bdev/raid/bdev_raid_rpc.o 00:04:56.985 SYMLINK libspdk_bdev_split.so 00:04:56.985 SYMLINK libspdk_bdev_malloc.so 00:04:56.985 SO libspdk_bdev_gpt.so.6.0 00:04:56.985 LIB libspdk_bdev_zone_block.a 00:04:56.985 SO libspdk_bdev_passthru.so.6.0 00:04:56.985 CC module/bdev/raid/bdev_raid_sb.o 00:04:56.985 SYMLINK libspdk_bdev_error.so 00:04:56.985 CC module/bdev/nvme/nvme_rpc.o 00:04:56.985 CC module/bdev/raid/raid0.o 00:04:56.985 CC module/bdev/iscsi/bdev_iscsi.o 00:04:56.985 LIB libspdk_bdev_ftl.a 00:04:56.985 SO libspdk_bdev_zone_block.so.6.0 00:04:56.985 SYMLINK libspdk_bdev_null.so 00:04:56.985 LIB libspdk_bdev_aio.a 00:04:56.985 SO libspdk_bdev_ftl.so.6.0 00:04:56.985 CC module/bdev/raid/raid1.o 00:04:56.985 SO libspdk_bdev_aio.so.6.0 00:04:56.985 SYMLINK libspdk_bdev_gpt.so 00:04:56.985 CC module/bdev/nvme/bdev_mdns_client.o 00:04:56.985 SYMLINK libspdk_bdev_passthru.so 00:04:56.985 CC module/bdev/nvme/vbdev_opal.o 00:04:56.985 SYMLINK libspdk_bdev_zone_block.so 00:04:56.985 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:56.985 SYMLINK libspdk_bdev_ftl.so 00:04:56.985 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:56.985 SYMLINK libspdk_bdev_aio.so 00:04:56.985 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:57.244 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:57.244 CC module/bdev/raid/concat.o 00:04:57.244 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:57.244 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:57.244 LIB libspdk_bdev_lvol.a 00:04:57.502 SO libspdk_bdev_lvol.so.6.0 00:04:57.502 LIB libspdk_bdev_iscsi.a 00:04:57.502 SYMLINK libspdk_bdev_lvol.so 00:04:57.502 SO libspdk_bdev_iscsi.so.6.0 00:04:57.502 SYMLINK libspdk_bdev_iscsi.so 00:04:57.502 LIB libspdk_bdev_raid.a 00:04:57.502 SO libspdk_bdev_raid.so.6.0 00:04:57.760 SYMLINK libspdk_bdev_raid.so 00:04:57.760 LIB libspdk_bdev_virtio.a 00:04:57.760 SO libspdk_bdev_virtio.so.6.0 00:04:57.760 SYMLINK libspdk_bdev_virtio.so 00:04:59.664 LIB libspdk_bdev_nvme.a 00:04:59.664 SO libspdk_bdev_nvme.so.7.1 00:04:59.664 SYMLINK libspdk_bdev_nvme.so 00:05:00.231 CC module/event/subsystems/iobuf/iobuf.o 00:05:00.231 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:00.231 CC module/event/subsystems/vmd/vmd.o 00:05:00.231 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:00.231 CC module/event/subsystems/sock/sock.o 00:05:00.231 CC module/event/subsystems/scheduler/scheduler.o 00:05:00.231 CC module/event/subsystems/keyring/keyring.o 00:05:00.231 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:00.231 CC module/event/subsystems/fsdev/fsdev.o 00:05:00.231 LIB libspdk_event_keyring.a 00:05:00.231 LIB libspdk_event_vhost_blk.a 00:05:00.231 LIB libspdk_event_fsdev.a 00:05:00.231 LIB libspdk_event_vmd.a 00:05:00.231 LIB libspdk_event_scheduler.a 00:05:00.231 LIB libspdk_event_sock.a 00:05:00.231 SO libspdk_event_keyring.so.1.0 00:05:00.231 LIB libspdk_event_iobuf.a 00:05:00.231 SO libspdk_event_vhost_blk.so.3.0 00:05:00.231 SO libspdk_event_fsdev.so.1.0 00:05:00.231 SO libspdk_event_scheduler.so.4.0 00:05:00.231 SO libspdk_event_vmd.so.6.0 00:05:00.231 SO 
libspdk_event_sock.so.5.0 00:05:00.491 SO libspdk_event_iobuf.so.3.0 00:05:00.491 SYMLINK libspdk_event_keyring.so 00:05:00.491 SYMLINK libspdk_event_vhost_blk.so 00:05:00.491 SYMLINK libspdk_event_fsdev.so 00:05:00.491 SYMLINK libspdk_event_scheduler.so 00:05:00.491 SYMLINK libspdk_event_sock.so 00:05:00.491 SYMLINK libspdk_event_vmd.so 00:05:00.491 SYMLINK libspdk_event_iobuf.so 00:05:00.751 CC module/event/subsystems/accel/accel.o 00:05:00.751 LIB libspdk_event_accel.a 00:05:00.751 SO libspdk_event_accel.so.6.0 00:05:01.011 SYMLINK libspdk_event_accel.so 00:05:01.011 CC module/event/subsystems/bdev/bdev.o 00:05:01.272 LIB libspdk_event_bdev.a 00:05:01.272 SO libspdk_event_bdev.so.6.0 00:05:01.529 SYMLINK libspdk_event_bdev.so 00:05:01.529 CC module/event/subsystems/nbd/nbd.o 00:05:01.529 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:01.529 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:01.529 CC module/event/subsystems/ublk/ublk.o 00:05:01.529 CC module/event/subsystems/scsi/scsi.o 00:05:01.788 LIB libspdk_event_nbd.a 00:05:01.788 LIB libspdk_event_ublk.a 00:05:01.788 SO libspdk_event_nbd.so.6.0 00:05:01.788 LIB libspdk_event_scsi.a 00:05:01.788 SO libspdk_event_ublk.so.3.0 00:05:01.788 SO libspdk_event_scsi.so.6.0 00:05:01.788 SYMLINK libspdk_event_nbd.so 00:05:01.788 SYMLINK libspdk_event_ublk.so 00:05:01.788 SYMLINK libspdk_event_scsi.so 00:05:01.788 LIB libspdk_event_nvmf.a 00:05:01.788 SO libspdk_event_nvmf.so.6.0 00:05:02.046 SYMLINK libspdk_event_nvmf.so 00:05:02.046 CC module/event/subsystems/iscsi/iscsi.o 00:05:02.046 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:02.305 LIB libspdk_event_vhost_scsi.a 00:05:02.305 LIB libspdk_event_iscsi.a 00:05:02.305 SO libspdk_event_vhost_scsi.so.3.0 00:05:02.305 SO libspdk_event_iscsi.so.6.0 00:05:02.305 SYMLINK libspdk_event_vhost_scsi.so 00:05:02.305 SYMLINK libspdk_event_iscsi.so 00:05:02.567 SO libspdk.so.6.0 00:05:02.567 SYMLINK libspdk.so 00:05:02.567 CXX app/trace/trace.o 00:05:02.567 CC app/trace_record/trace_record.o 00:05:02.567 CC app/spdk_lspci/spdk_lspci.o 00:05:02.567 CC test/rpc_client/rpc_client_test.o 00:05:02.567 TEST_HEADER include/spdk/accel.h 00:05:02.567 TEST_HEADER include/spdk/accel_module.h 00:05:02.567 CC app/spdk_nvme_perf/perf.o 00:05:02.831 TEST_HEADER include/spdk/assert.h 00:05:02.831 CC app/spdk_nvme_identify/identify.o 00:05:02.831 TEST_HEADER include/spdk/barrier.h 00:05:02.831 TEST_HEADER include/spdk/base64.h 00:05:02.831 TEST_HEADER include/spdk/bdev.h 00:05:02.831 TEST_HEADER include/spdk/bdev_module.h 00:05:02.831 TEST_HEADER include/spdk/bdev_zone.h 00:05:02.831 TEST_HEADER include/spdk/bit_array.h 00:05:02.831 TEST_HEADER include/spdk/bit_pool.h 00:05:02.831 TEST_HEADER include/spdk/blob_bdev.h 00:05:02.831 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:02.831 TEST_HEADER include/spdk/blobfs.h 00:05:02.831 TEST_HEADER include/spdk/blob.h 00:05:02.831 TEST_HEADER include/spdk/conf.h 00:05:02.831 TEST_HEADER include/spdk/config.h 00:05:02.831 TEST_HEADER include/spdk/cpuset.h 00:05:02.831 TEST_HEADER include/spdk/crc16.h 00:05:02.831 CC app/iscsi_tgt/iscsi_tgt.o 00:05:02.831 TEST_HEADER include/spdk/crc32.h 00:05:02.831 TEST_HEADER include/spdk/crc64.h 00:05:02.831 CC app/nvmf_tgt/nvmf_main.o 00:05:02.831 TEST_HEADER include/spdk/dif.h 00:05:02.831 TEST_HEADER include/spdk/dma.h 00:05:02.831 TEST_HEADER include/spdk/endian.h 00:05:02.831 TEST_HEADER include/spdk/env_dpdk.h 00:05:02.831 TEST_HEADER include/spdk/env.h 00:05:02.831 TEST_HEADER include/spdk/event.h 00:05:02.831 TEST_HEADER 
include/spdk/fd_group.h 00:05:02.831 TEST_HEADER include/spdk/fd.h 00:05:02.831 TEST_HEADER include/spdk/file.h 00:05:02.831 TEST_HEADER include/spdk/fsdev.h 00:05:02.831 TEST_HEADER include/spdk/fsdev_module.h 00:05:02.831 CC app/spdk_tgt/spdk_tgt.o 00:05:02.831 TEST_HEADER include/spdk/ftl.h 00:05:02.831 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:02.831 TEST_HEADER include/spdk/gpt_spec.h 00:05:02.831 TEST_HEADER include/spdk/hexlify.h 00:05:02.831 TEST_HEADER include/spdk/histogram_data.h 00:05:02.831 CC test/thread/poller_perf/poller_perf.o 00:05:02.831 TEST_HEADER include/spdk/idxd.h 00:05:02.831 CC examples/util/zipf/zipf.o 00:05:02.831 CC examples/ioat/perf/perf.o 00:05:02.831 TEST_HEADER include/spdk/idxd_spec.h 00:05:02.831 TEST_HEADER include/spdk/init.h 00:05:02.831 TEST_HEADER include/spdk/ioat.h 00:05:02.831 TEST_HEADER include/spdk/ioat_spec.h 00:05:02.831 TEST_HEADER include/spdk/iscsi_spec.h 00:05:02.831 TEST_HEADER include/spdk/json.h 00:05:02.831 TEST_HEADER include/spdk/jsonrpc.h 00:05:02.831 TEST_HEADER include/spdk/keyring.h 00:05:02.831 TEST_HEADER include/spdk/keyring_module.h 00:05:02.831 TEST_HEADER include/spdk/likely.h 00:05:02.831 TEST_HEADER include/spdk/log.h 00:05:02.831 TEST_HEADER include/spdk/lvol.h 00:05:02.831 TEST_HEADER include/spdk/md5.h 00:05:02.831 TEST_HEADER include/spdk/memory.h 00:05:02.831 TEST_HEADER include/spdk/mmio.h 00:05:02.831 TEST_HEADER include/spdk/nbd.h 00:05:02.831 TEST_HEADER include/spdk/net.h 00:05:02.831 TEST_HEADER include/spdk/notify.h 00:05:02.831 CC test/dma/test_dma/test_dma.o 00:05:02.831 TEST_HEADER include/spdk/nvme.h 00:05:02.831 TEST_HEADER include/spdk/nvme_intel.h 00:05:02.831 CC test/app/bdev_svc/bdev_svc.o 00:05:02.831 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:02.831 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:02.831 TEST_HEADER include/spdk/nvme_spec.h 00:05:02.831 TEST_HEADER include/spdk/nvme_zns.h 00:05:02.832 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:02.832 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:02.832 TEST_HEADER include/spdk/nvmf.h 00:05:02.832 LINK spdk_lspci 00:05:02.832 TEST_HEADER include/spdk/nvmf_spec.h 00:05:02.832 TEST_HEADER include/spdk/nvmf_transport.h 00:05:02.832 TEST_HEADER include/spdk/opal.h 00:05:02.832 TEST_HEADER include/spdk/opal_spec.h 00:05:02.832 TEST_HEADER include/spdk/pci_ids.h 00:05:02.832 TEST_HEADER include/spdk/pipe.h 00:05:02.832 CC test/env/mem_callbacks/mem_callbacks.o 00:05:02.832 TEST_HEADER include/spdk/queue.h 00:05:02.832 TEST_HEADER include/spdk/reduce.h 00:05:02.832 TEST_HEADER include/spdk/rpc.h 00:05:02.832 TEST_HEADER include/spdk/scheduler.h 00:05:02.832 TEST_HEADER include/spdk/scsi.h 00:05:02.832 TEST_HEADER include/spdk/scsi_spec.h 00:05:02.832 TEST_HEADER include/spdk/sock.h 00:05:02.832 TEST_HEADER include/spdk/stdinc.h 00:05:02.832 TEST_HEADER include/spdk/string.h 00:05:02.832 TEST_HEADER include/spdk/thread.h 00:05:02.832 TEST_HEADER include/spdk/trace.h 00:05:02.832 TEST_HEADER include/spdk/trace_parser.h 00:05:02.832 TEST_HEADER include/spdk/tree.h 00:05:02.832 TEST_HEADER include/spdk/ublk.h 00:05:02.832 TEST_HEADER include/spdk/util.h 00:05:02.832 TEST_HEADER include/spdk/uuid.h 00:05:02.832 TEST_HEADER include/spdk/version.h 00:05:02.832 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:02.832 LINK rpc_client_test 00:05:02.832 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:02.832 TEST_HEADER include/spdk/vhost.h 00:05:02.832 TEST_HEADER include/spdk/vmd.h 00:05:02.832 TEST_HEADER include/spdk/xor.h 00:05:02.832 TEST_HEADER 
include/spdk/zipf.h 00:05:02.832 CXX test/cpp_headers/accel.o 00:05:03.093 LINK poller_perf 00:05:03.093 LINK nvmf_tgt 00:05:03.093 LINK zipf 00:05:03.093 LINK spdk_trace_record 00:05:03.093 LINK iscsi_tgt 00:05:03.093 LINK spdk_tgt 00:05:03.093 LINK ioat_perf 00:05:03.093 LINK bdev_svc 00:05:03.093 CC examples/ioat/verify/verify.o 00:05:03.093 CXX test/cpp_headers/accel_module.o 00:05:03.354 LINK spdk_trace 00:05:03.354 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:03.354 CC test/env/vtophys/vtophys.o 00:05:03.354 CXX test/cpp_headers/assert.o 00:05:03.354 CC app/spdk_nvme_discover/discovery_aer.o 00:05:03.354 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:03.354 CC test/event/event_perf/event_perf.o 00:05:03.354 CC test/env/memory/memory_ut.o 00:05:03.354 LINK verify 00:05:03.354 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:03.354 LINK vtophys 00:05:03.621 LINK test_dma 00:05:03.621 CC test/event/reactor/reactor.o 00:05:03.621 CC examples/thread/thread/thread_ex.o 00:05:03.621 LINK interrupt_tgt 00:05:03.621 CXX test/cpp_headers/barrier.o 00:05:03.621 LINK env_dpdk_post_init 00:05:03.621 LINK spdk_nvme_discover 00:05:03.621 LINK event_perf 00:05:03.621 CC examples/sock/hello_world/hello_sock.o 00:05:03.621 LINK spdk_nvme_perf 00:05:03.621 LINK reactor 00:05:03.621 LINK spdk_nvme_identify 00:05:03.621 CC test/app/histogram_perf/histogram_perf.o 00:05:03.621 CXX test/cpp_headers/base64.o 00:05:03.885 CXX test/cpp_headers/bdev.o 00:05:03.885 CC test/app/jsoncat/jsoncat.o 00:05:03.885 LINK mem_callbacks 00:05:03.885 CC test/event/reactor_perf/reactor_perf.o 00:05:03.885 LINK thread 00:05:03.885 CC test/app/stub/stub.o 00:05:03.885 LINK histogram_perf 00:05:03.885 CC examples/idxd/perf/perf.o 00:05:03.885 LINK jsoncat 00:05:03.885 LINK hello_sock 00:05:03.885 CC examples/vmd/lsvmd/lsvmd.o 00:05:03.885 LINK nvme_fuzz 00:05:03.885 CC test/event/app_repeat/app_repeat.o 00:05:03.885 CC examples/vmd/led/led.o 00:05:03.885 CC app/spdk_top/spdk_top.o 00:05:04.144 CC test/env/pci/pci_ut.o 00:05:04.144 CXX test/cpp_headers/bdev_module.o 00:05:04.144 LINK reactor_perf 00:05:04.144 LINK stub 00:05:04.144 CC test/event/scheduler/scheduler.o 00:05:04.144 LINK lsvmd 00:05:04.144 CXX test/cpp_headers/bdev_zone.o 00:05:04.144 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:04.144 LINK led 00:05:04.144 LINK app_repeat 00:05:04.144 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:04.403 CC app/vhost/vhost.o 00:05:04.403 CXX test/cpp_headers/bit_array.o 00:05:04.403 CC examples/accel/perf/accel_perf.o 00:05:04.403 LINK idxd_perf 00:05:04.403 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:04.403 LINK scheduler 00:05:04.403 CC app/spdk_dd/spdk_dd.o 00:05:04.403 CC test/accel/dif/dif.o 00:05:04.403 LINK pci_ut 00:05:04.667 CC app/fio/nvme/fio_plugin.o 00:05:04.667 LINK vhost 00:05:04.667 CXX test/cpp_headers/bit_pool.o 00:05:04.667 CC examples/nvme/hello_world/hello_world.o 00:05:04.667 CC examples/blob/hello_world/hello_blob.o 00:05:04.667 CC test/blobfs/mkfs/mkfs.o 00:05:04.667 CC examples/nvme/reconnect/reconnect.o 00:05:04.928 CXX test/cpp_headers/blob_bdev.o 00:05:04.928 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:04.928 LINK vhost_fuzz 00:05:04.928 CC app/fio/bdev/fio_plugin.o 00:05:04.928 LINK spdk_dd 00:05:04.928 LINK accel_perf 00:05:04.928 LINK hello_world 00:05:04.928 LINK mkfs 00:05:04.928 CC examples/blob/cli/blobcli.o 00:05:04.928 LINK memory_ut 00:05:04.928 LINK hello_blob 00:05:05.189 CXX test/cpp_headers/blobfs_bdev.o 00:05:05.189 LINK spdk_top 00:05:05.189 CXX 
test/cpp_headers/blobfs.o 00:05:05.189 CXX test/cpp_headers/blob.o 00:05:05.189 LINK hello_fsdev 00:05:05.189 LINK spdk_nvme 00:05:05.189 LINK reconnect 00:05:05.189 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:05.449 LINK dif 00:05:05.449 CC examples/nvme/arbitration/arbitration.o 00:05:05.449 CC examples/nvme/hotplug/hotplug.o 00:05:05.449 CXX test/cpp_headers/conf.o 00:05:05.449 CC test/lvol/esnap/esnap.o 00:05:05.449 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:05.449 CC examples/bdev/hello_world/hello_bdev.o 00:05:05.449 CXX test/cpp_headers/config.o 00:05:05.449 CXX test/cpp_headers/cpuset.o 00:05:05.449 CC examples/bdev/bdevperf/bdevperf.o 00:05:05.449 CC examples/nvme/abort/abort.o 00:05:05.449 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:05.449 LINK spdk_bdev 00:05:05.707 CC test/nvme/aer/aer.o 00:05:05.707 LINK blobcli 00:05:05.707 LINK cmb_copy 00:05:05.707 CC test/nvme/reset/reset.o 00:05:05.707 LINK hotplug 00:05:05.707 CXX test/cpp_headers/crc16.o 00:05:05.707 CC test/nvme/sgl/sgl.o 00:05:05.707 LINK hello_bdev 00:05:05.707 LINK arbitration 00:05:05.707 LINK pmr_persistence 00:05:06.114 CXX test/cpp_headers/crc32.o 00:05:06.114 LINK nvme_manage 00:05:06.114 CC test/nvme/e2edp/nvme_dp.o 00:05:06.114 CXX test/cpp_headers/crc64.o 00:05:06.114 CC test/bdev/bdevio/bdevio.o 00:05:06.114 LINK aer 00:05:06.114 CC test/nvme/overhead/overhead.o 00:05:06.114 LINK reset 00:05:06.114 LINK abort 00:05:06.114 CC test/nvme/err_injection/err_injection.o 00:05:06.114 LINK sgl 00:05:06.114 CC test/nvme/reserve/reserve.o 00:05:06.114 CC test/nvme/startup/startup.o 00:05:06.114 CXX test/cpp_headers/dif.o 00:05:06.114 CXX test/cpp_headers/dma.o 00:05:06.114 CXX test/cpp_headers/endian.o 00:05:06.114 CXX test/cpp_headers/env_dpdk.o 00:05:06.114 CXX test/cpp_headers/env.o 00:05:06.384 LINK iscsi_fuzz 00:05:06.384 CXX test/cpp_headers/event.o 00:05:06.384 LINK nvme_dp 00:05:06.384 LINK startup 00:05:06.384 LINK err_injection 00:05:06.384 LINK reserve 00:05:06.384 CXX test/cpp_headers/fd_group.o 00:05:06.384 CC test/nvme/simple_copy/simple_copy.o 00:05:06.384 LINK overhead 00:05:06.384 CC test/nvme/connect_stress/connect_stress.o 00:05:06.384 LINK bdevio 00:05:06.384 LINK bdevperf 00:05:06.384 CXX test/cpp_headers/fd.o 00:05:06.384 CC test/nvme/boot_partition/boot_partition.o 00:05:06.646 CXX test/cpp_headers/file.o 00:05:06.646 CXX test/cpp_headers/fsdev.o 00:05:06.646 CC test/nvme/compliance/nvme_compliance.o 00:05:06.646 CC test/nvme/fused_ordering/fused_ordering.o 00:05:06.646 CXX test/cpp_headers/fsdev_module.o 00:05:06.647 CXX test/cpp_headers/ftl.o 00:05:06.647 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:06.647 CC test/nvme/fdp/fdp.o 00:05:06.647 CC test/nvme/cuse/cuse.o 00:05:06.647 LINK simple_copy 00:05:06.647 CXX test/cpp_headers/fuse_dispatcher.o 00:05:06.647 LINK connect_stress 00:05:06.647 CXX test/cpp_headers/gpt_spec.o 00:05:06.647 LINK boot_partition 00:05:06.647 CXX test/cpp_headers/hexlify.o 00:05:06.908 CXX test/cpp_headers/histogram_data.o 00:05:06.908 CXX test/cpp_headers/idxd.o 00:05:06.908 LINK fused_ordering 00:05:06.908 CXX test/cpp_headers/idxd_spec.o 00:05:06.908 LINK doorbell_aers 00:05:06.908 CXX test/cpp_headers/init.o 00:05:06.908 CXX test/cpp_headers/ioat.o 00:05:06.908 CXX test/cpp_headers/ioat_spec.o 00:05:06.908 CXX test/cpp_headers/iscsi_spec.o 00:05:06.908 LINK nvme_compliance 00:05:06.908 CXX test/cpp_headers/json.o 00:05:06.908 CC examples/nvmf/nvmf/nvmf.o 00:05:06.908 CXX test/cpp_headers/jsonrpc.o 00:05:06.908 CXX test/cpp_headers/keyring.o 
00:05:06.909 CXX test/cpp_headers/keyring_module.o 00:05:06.909 LINK fdp 00:05:06.909 CXX test/cpp_headers/likely.o 00:05:07.169 CXX test/cpp_headers/log.o 00:05:07.169 CXX test/cpp_headers/lvol.o 00:05:07.169 CXX test/cpp_headers/md5.o 00:05:07.169 CXX test/cpp_headers/memory.o 00:05:07.169 CXX test/cpp_headers/mmio.o 00:05:07.169 CXX test/cpp_headers/nbd.o 00:05:07.169 CXX test/cpp_headers/net.o 00:05:07.169 CXX test/cpp_headers/notify.o 00:05:07.169 CXX test/cpp_headers/nvme.o 00:05:07.169 CXX test/cpp_headers/nvme_intel.o 00:05:07.169 CXX test/cpp_headers/nvme_ocssd.o 00:05:07.169 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:07.169 CXX test/cpp_headers/nvme_spec.o 00:05:07.169 CXX test/cpp_headers/nvme_zns.o 00:05:07.427 CXX test/cpp_headers/nvmf_cmd.o 00:05:07.427 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:07.427 CXX test/cpp_headers/nvmf.o 00:05:07.427 LINK nvmf 00:05:07.427 CXX test/cpp_headers/nvmf_spec.o 00:05:07.427 CXX test/cpp_headers/nvmf_transport.o 00:05:07.427 CXX test/cpp_headers/opal.o 00:05:07.427 CXX test/cpp_headers/opal_spec.o 00:05:07.427 CXX test/cpp_headers/pci_ids.o 00:05:07.427 CXX test/cpp_headers/pipe.o 00:05:07.427 CXX test/cpp_headers/queue.o 00:05:07.427 CXX test/cpp_headers/reduce.o 00:05:07.427 CXX test/cpp_headers/rpc.o 00:05:07.427 CXX test/cpp_headers/scheduler.o 00:05:07.427 CXX test/cpp_headers/scsi.o 00:05:07.688 CXX test/cpp_headers/scsi_spec.o 00:05:07.688 CXX test/cpp_headers/sock.o 00:05:07.688 CXX test/cpp_headers/stdinc.o 00:05:07.688 CXX test/cpp_headers/string.o 00:05:07.688 CXX test/cpp_headers/thread.o 00:05:07.688 CXX test/cpp_headers/trace.o 00:05:07.688 CXX test/cpp_headers/trace_parser.o 00:05:07.688 CXX test/cpp_headers/tree.o 00:05:07.688 CXX test/cpp_headers/ublk.o 00:05:07.688 CXX test/cpp_headers/util.o 00:05:07.688 CXX test/cpp_headers/uuid.o 00:05:07.688 CXX test/cpp_headers/version.o 00:05:07.688 CXX test/cpp_headers/vfio_user_pci.o 00:05:07.688 CXX test/cpp_headers/vfio_user_spec.o 00:05:07.688 CXX test/cpp_headers/vhost.o 00:05:07.947 CXX test/cpp_headers/vmd.o 00:05:07.947 CXX test/cpp_headers/xor.o 00:05:07.947 CXX test/cpp_headers/zipf.o 00:05:08.515 LINK cuse 00:05:11.803 LINK esnap 00:05:12.093 00:05:12.093 real 1m27.210s 00:05:12.093 user 10m11.164s 00:05:12.093 sys 1m52.516s 00:05:12.093 12:19:17 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:12.093 12:19:17 make -- common/autotest_common.sh@10 -- $ set +x 00:05:12.093 ************************************ 00:05:12.093 END TEST make 00:05:12.093 ************************************ 00:05:12.093 12:19:17 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:12.093 12:19:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:12.093 12:19:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:12.093 12:19:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.093 12:19:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:12.093 12:19:17 -- pm/common@44 -- $ pid=2619090 00:05:12.093 12:19:17 -- pm/common@50 -- $ kill -TERM 2619090 00:05:12.093 12:19:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.093 12:19:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:12.093 12:19:17 -- pm/common@44 -- $ pid=2619092 00:05:12.093 12:19:17 -- pm/common@50 -- $ kill -TERM 2619092 00:05:12.093 12:19:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.093 
12:19:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:12.093 12:19:17 -- pm/common@44 -- $ pid=2619094 00:05:12.093 12:19:17 -- pm/common@50 -- $ kill -TERM 2619094 00:05:12.093 12:19:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.093 12:19:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:12.093 12:19:17 -- pm/common@46 -- $ continue 00:05:12.093 12:19:17 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:12.093 12:19:17 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:05:12.093 12:19:17 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.093 12:19:17 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.093 12:19:17 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.093 12:19:17 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.093 12:19:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.093 12:19:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.093 12:19:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.093 12:19:17 -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.093 12:19:17 -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.093 12:19:17 -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.093 12:19:17 -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.093 12:19:17 -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.093 12:19:17 -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.093 12:19:17 -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.093 12:19:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.093 12:19:17 -- scripts/common.sh@344 -- # case "$op" in 00:05:12.093 12:19:17 -- scripts/common.sh@345 -- # : 1 00:05:12.093 12:19:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.093 12:19:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.093 12:19:17 -- scripts/common.sh@365 -- # decimal 1 00:05:12.093 12:19:17 -- scripts/common.sh@353 -- # local d=1 00:05:12.093 12:19:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.093 12:19:17 -- scripts/common.sh@355 -- # echo 1 00:05:12.093 12:19:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.093 12:19:17 -- scripts/common.sh@366 -- # decimal 2 00:05:12.093 12:19:17 -- scripts/common.sh@353 -- # local d=2 00:05:12.093 12:19:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.093 12:19:17 -- scripts/common.sh@355 -- # echo 2 00:05:12.093 12:19:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.093 12:19:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.093 12:19:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.093 12:19:17 -- scripts/common.sh@368 -- # return 0 00:05:12.093 12:19:17 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.093 12:19:17 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.093 --rc genhtml_branch_coverage=1 00:05:12.093 --rc genhtml_function_coverage=1 00:05:12.093 --rc genhtml_legend=1 00:05:12.093 --rc geninfo_all_blocks=1 00:05:12.093 --rc geninfo_unexecuted_blocks=1 00:05:12.093 00:05:12.093 ' 00:05:12.093 12:19:17 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.093 --rc genhtml_branch_coverage=1 00:05:12.093 --rc genhtml_function_coverage=1 00:05:12.093 --rc genhtml_legend=1 00:05:12.093 --rc geninfo_all_blocks=1 00:05:12.093 --rc geninfo_unexecuted_blocks=1 00:05:12.093 00:05:12.093 ' 00:05:12.093 12:19:17 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.093 --rc genhtml_branch_coverage=1 00:05:12.093 --rc genhtml_function_coverage=1 00:05:12.093 --rc genhtml_legend=1 00:05:12.093 --rc geninfo_all_blocks=1 00:05:12.093 --rc geninfo_unexecuted_blocks=1 00:05:12.093 00:05:12.093 ' 00:05:12.093 12:19:17 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.094 --rc genhtml_branch_coverage=1 00:05:12.094 --rc genhtml_function_coverage=1 00:05:12.094 --rc genhtml_legend=1 00:05:12.094 --rc geninfo_all_blocks=1 00:05:12.094 --rc geninfo_unexecuted_blocks=1 00:05:12.094 00:05:12.094 ' 00:05:12.094 12:19:17 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:12.094 12:19:17 -- nvmf/common.sh@7 -- # uname -s 00:05:12.094 12:19:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.094 12:19:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.094 12:19:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.094 12:19:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.094 12:19:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.094 12:19:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.094 12:19:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.094 12:19:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.094 12:19:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.353 12:19:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.353 12:19:17 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:05:12.353 12:19:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:05:12.353 12:19:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.353 12:19:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.353 12:19:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:12.353 12:19:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:12.353 12:19:17 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:12.353 12:19:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:12.353 12:19:17 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.353 12:19:17 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.353 12:19:17 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.353 12:19:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.353 12:19:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.353 12:19:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.353 12:19:17 -- paths/export.sh@5 -- # export PATH 00:05:12.353 12:19:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.353 12:19:17 -- nvmf/common.sh@51 -- # : 0 00:05:12.353 12:19:17 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:12.353 12:19:17 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:12.353 12:19:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:12.353 12:19:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.353 12:19:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.353 12:19:17 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:12.353 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:12.353 12:19:17 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:12.353 12:19:17 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:12.353 12:19:17 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:12.353 12:19:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:12.353 12:19:17 -- spdk/autotest.sh@32 -- # uname -s 00:05:12.353 12:19:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:12.353 12:19:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:12.353 12:19:17 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:05:12.353 
12:19:17 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:12.353 12:19:17 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:05:12.353 12:19:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:12.353 12:19:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:12.353 12:19:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:12.353 12:19:17 -- spdk/autotest.sh@48 -- # udevadm_pid=2677210 00:05:12.353 12:19:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:12.353 12:19:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:12.353 12:19:17 -- pm/common@17 -- # local monitor 00:05:12.353 12:19:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.353 12:19:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.353 12:19:17 -- pm/common@21 -- # date +%s 00:05:12.353 12:19:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.353 12:19:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.353 12:19:17 -- pm/common@21 -- # date +%s 00:05:12.353 12:19:17 -- pm/common@25 -- # sleep 1 00:05:12.353 12:19:17 -- pm/common@21 -- # date +%s 00:05:12.353 12:19:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732101557 00:05:12.353 12:19:17 -- pm/common@21 -- # date +%s 00:05:12.354 12:19:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732101557 00:05:12.354 12:19:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732101557 00:05:12.354 12:19:17 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732101557 00:05:12.354 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732101557_collect-vmstat.pm.log 00:05:12.354 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732101557_collect-cpu-load.pm.log 00:05:12.354 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732101557_collect-cpu-temp.pm.log 00:05:12.354 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732101557_collect-bmc-pm.bmc.pm.log 00:05:13.290 12:19:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:13.290 12:19:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:13.290 12:19:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:13.290 12:19:18 -- common/autotest_common.sh@10 -- # set +x 00:05:13.290 12:19:18 -- spdk/autotest.sh@59 -- # create_test_list 00:05:13.290 12:19:18 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:13.290 12:19:18 -- common/autotest_common.sh@10 -- # set +x 00:05:13.290 12:19:18 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:05:13.290 12:19:18 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:13.290 12:19:18 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:13.290 12:19:18 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:05:13.290 12:19:18 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:13.290 12:19:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:13.290 12:19:18 -- common/autotest_common.sh@1457 -- # uname 00:05:13.290 12:19:18 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:13.290 12:19:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:13.290 12:19:18 -- common/autotest_common.sh@1477 -- # uname 00:05:13.290 12:19:18 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:13.290 12:19:18 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:13.290 12:19:18 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:13.563 lcov: LCOV version 1.15 00:05:13.563 12:19:19 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:05:40.103 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:40.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:06:26.785 12:20:24 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:26.785 12:20:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:26.785 12:20:24 -- common/autotest_common.sh@10 -- # set +x 00:06:26.785 12:20:24 -- spdk/autotest.sh@78 -- # rm -f 00:06:26.785 12:20:24 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:06:26.785 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:06:26.785 0000:00:04.7 (8086 3c27): Already using the ioatdma driver 00:06:26.785 0000:00:04.6 (8086 3c26): Already using the ioatdma driver 00:06:26.785 0000:00:04.5 (8086 3c25): Already using the ioatdma driver 00:06:26.785 0000:00:04.4 (8086 3c24): Already using the ioatdma driver 00:06:26.785 0000:00:04.3 (8086 3c23): Already using the ioatdma driver 00:06:26.785 0000:00:04.2 (8086 3c22): Already using the ioatdma driver 00:06:26.785 0000:00:04.1 (8086 3c21): Already using the ioatdma driver 00:06:26.785 0000:00:04.0 (8086 3c20): Already using the ioatdma driver 00:06:26.785 0000:80:04.7 (8086 3c27): Already using the ioatdma driver 00:06:26.785 0000:80:04.6 (8086 3c26): Already using the ioatdma driver 00:06:26.785 0000:80:04.5 (8086 3c25): Already using the ioatdma driver 00:06:26.785 0000:80:04.4 (8086 3c24): Already using the ioatdma driver 00:06:26.785 0000:80:04.3 (8086 3c23): Already using the ioatdma driver 00:06:26.785 0000:80:04.2 (8086 3c22): Already using the ioatdma driver 00:06:26.785 0000:80:04.1 (8086 3c21): Already using the ioatdma driver 00:06:26.785 0000:80:04.0 (8086 3c20): Already using the ioatdma driver 00:06:26.785 12:20:26 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:26.785 12:20:26 -- 
common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:26.785 12:20:26 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:26.785 12:20:26 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:26.785 12:20:26 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:26.785 12:20:26 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:26.785 12:20:26 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:26.785 12:20:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:26.785 12:20:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:26.785 12:20:26 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:26.785 12:20:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:26.785 12:20:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:26.785 12:20:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:26.785 12:20:26 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:26.785 12:20:26 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:26.785 No valid GPT data, bailing 00:06:26.785 12:20:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:26.785 12:20:26 -- scripts/common.sh@394 -- # pt= 00:06:26.785 12:20:26 -- scripts/common.sh@395 -- # return 1 00:06:26.785 12:20:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:26.785 1+0 records in 00:06:26.785 1+0 records out 00:06:26.785 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00179274 s, 585 MB/s 00:06:26.785 12:20:26 -- spdk/autotest.sh@105 -- # sync 00:06:26.785 12:20:26 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:26.785 12:20:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:26.785 12:20:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:26.785 12:20:28 -- spdk/autotest.sh@111 -- # uname -s 00:06:26.785 12:20:28 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:26.785 12:20:28 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:26.785 12:20:28 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:06:26.785 Hugepages 00:06:26.785 node hugesize free / total 00:06:26.785 node0 1048576kB 0 / 0 00:06:26.785 node0 2048kB 0 / 0 00:06:26.785 node1 1048576kB 0 / 0 00:06:26.785 node1 2048kB 0 / 0 00:06:26.785 00:06:26.785 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:26.785 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - - 00:06:26.785 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - - 00:06:26.785 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - - 00:06:26.785 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - - 00:06:26.785 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - - 00:06:26.785 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - - 00:06:26.785 I/OAT 0000:00:04.6 8086 3c26 0 ioatdma - - 00:06:26.785 I/OAT 0000:00:04.7 8086 3c27 0 ioatdma - - 00:06:26.785 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - - 00:06:26.785 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - - 00:06:26.785 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - - 00:06:26.785 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - - 00:06:26.785 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - - 00:06:26.785 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - - 00:06:26.785 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - - 00:06:26.785 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - - 00:06:26.785 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:06:26.785 12:20:29 -- spdk/autotest.sh@117 -- # uname 
-s 00:06:26.785 12:20:30 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:26.785 12:20:30 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:26.785 12:20:30 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:06:26.785 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:06:26.785 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:06:26.785 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:06:26.786 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:06:26.786 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:06:26.786 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:06:26.786 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:06:26.786 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:06:26.786 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:06:26.786 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:06:26.786 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:06:26.786 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:06:26.786 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:06:26.786 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:06:26.786 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:06:26.786 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:06:26.786 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:06:26.786 12:20:32 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:27.723 12:20:33 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:27.723 12:20:33 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:27.723 12:20:33 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:27.723 12:20:33 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:27.723 12:20:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:27.723 12:20:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:27.723 12:20:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:27.723 12:20:33 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:27.723 12:20:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:27.723 12:20:33 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:27.723 12:20:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:82:00.0 00:06:27.723 12:20:33 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:06:29.102 Waiting for block devices as requested 00:06:29.102 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:06:29.362 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:06:29.362 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:06:29.362 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:06:29.362 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:06:29.621 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:06:29.621 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:06:29.621 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:06:29.621 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:06:29.880 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:06:29.880 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:06:29.880 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:06:30.139 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:06:30.139 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:06:30.139 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:06:30.397 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:06:30.397 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:06:30.397 12:20:36 -- common/autotest_common.sh@1524 -- # for bdf in 
"${bdfs[@]}" 00:06:30.397 12:20:36 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:06:30.397 12:20:36 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:06:30.397 12:20:36 -- common/autotest_common.sh@1487 -- # grep 0000:82:00.0/nvme/nvme 00:06:30.397 12:20:36 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:06:30.397 12:20:36 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:06:30.397 12:20:36 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:06:30.397 12:20:36 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:30.397 12:20:36 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:30.397 12:20:36 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:30.397 12:20:36 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:30.397 12:20:36 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:30.397 12:20:36 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:30.397 12:20:36 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:06:30.397 12:20:36 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:30.397 12:20:36 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:30.397 12:20:36 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:30.397 12:20:36 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:30.397 12:20:36 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:30.397 12:20:36 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:30.397 12:20:36 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:30.397 12:20:36 -- common/autotest_common.sh@1543 -- # continue 00:06:30.397 12:20:36 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:30.397 12:20:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:30.397 12:20:36 -- common/autotest_common.sh@10 -- # set +x 00:06:30.397 12:20:36 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:30.397 12:20:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:30.397 12:20:36 -- common/autotest_common.sh@10 -- # set +x 00:06:30.397 12:20:36 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:06:31.777 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:06:31.777 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:06:31.777 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:06:31.778 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:06:31.778 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:06:31.778 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:06:31.778 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:06:31.778 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:06:31.778 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:06:32.038 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:06:32.038 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:06:32.038 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:06:32.038 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:06:32.038 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:06:32.038 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:06:32.038 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:06:32.979 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:06:32.979 12:20:38 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:32.979 12:20:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:32.979 
12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:06:32.979 12:20:38 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:32.979 12:20:38 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:32.979 12:20:38 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:32.979 12:20:38 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:32.979 12:20:38 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:32.979 12:20:38 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:32.979 12:20:38 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:32.979 12:20:38 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:32.979 12:20:38 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:32.979 12:20:38 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:32.979 12:20:38 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:32.979 12:20:38 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:32.979 12:20:38 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:32.979 12:20:38 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:32.979 12:20:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:82:00.0 00:06:32.979 12:20:38 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:32.979 12:20:38 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:06:32.979 12:20:38 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:06:32.979 12:20:38 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:32.979 12:20:38 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:06:32.979 12:20:38 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:06:32.979 12:20:38 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:82:00.0 00:06:32.979 12:20:38 -- common/autotest_common.sh@1579 -- # [[ -z 0000:82:00.0 ]] 00:06:32.979 12:20:38 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2687502 00:06:32.979 12:20:38 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:32.979 12:20:38 -- common/autotest_common.sh@1585 -- # waitforlisten 2687502 00:06:32.979 12:20:38 -- common/autotest_common.sh@835 -- # '[' -z 2687502 ']' 00:06:32.979 12:20:38 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.979 12:20:38 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.979 12:20:38 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.979 12:20:38 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.979 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:06:32.979 [2024-11-20 12:20:38.718953] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
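The device discovery traced above follows a two-step pattern: gen_nvme.sh enumerates NVMe controllers as a bdev config and jq pulls out each PCI address (traddr), then the harness keeps only the BDFs whose sysfs device id matches the target (0x0a54, an Intel datacenter NVMe SSD, which is how 0000:82:00.0 is selected). A minimal standalone sketch of that filter, assuming only an SPDK checkout at $rootdir and the standard sysfs layout:

    #!/usr/bin/env bash
    # Sketch of the traced get_nvme_bdfs / get_nvme_bdfs_by_id pattern (simplified).
    rootdir=${rootdir:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}

    get_nvme_bdfs() {
        # gen_nvme.sh emits bdev_nvme_attach_controller params; traddr is the PCI BDF
        "$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'
    }

    get_nvme_bdfs_by_id() {
        local id=$1 bdf bdfs=()
        for bdf in $(get_nvme_bdfs); do
            # each PCI function exposes its device id in sysfs
            [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$id" ]] && bdfs+=("$bdf")
        done
        ((${#bdfs[@]} > 0)) && printf '%s\n' "${bdfs[@]}"
    }

    get_nvme_bdfs_by_id 0x0a54    # prints 0000:82:00.0 on this rig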
00:06:32.979 [2024-11-20 12:20:38.719038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2687502 ] 00:06:33.238 [2024-11-20 12:20:38.789086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.238 [2024-11-20 12:20:38.851988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.496 12:20:39 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.497 12:20:39 -- common/autotest_common.sh@868 -- # return 0 00:06:33.497 12:20:39 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:06:33.497 12:20:39 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:06:33.497 12:20:39 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:06:36.781 nvme0n1 00:06:36.781 12:20:42 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:37.039 [2024-11-20 12:20:42.558449] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:06:37.039 [2024-11-20 12:20:42.558496] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:06:37.039 request: 00:06:37.039 { 00:06:37.039 "nvme_ctrlr_name": "nvme0", 00:06:37.039 "password": "test", 00:06:37.039 "method": "bdev_nvme_opal_revert", 00:06:37.039 "req_id": 1 00:06:37.039 } 00:06:37.039 Got JSON-RPC error response 00:06:37.039 response: 00:06:37.039 { 00:06:37.039 "code": -32603, 00:06:37.039 "message": "Internal error" 00:06:37.039 } 00:06:37.039 12:20:42 -- common/autotest_common.sh@1591 -- # true 00:06:37.039 12:20:42 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:06:37.039 12:20:42 -- common/autotest_common.sh@1595 -- # killprocess 2687502 00:06:37.039 12:20:42 -- common/autotest_common.sh@954 -- # '[' -z 2687502 ']' 00:06:37.039 12:20:42 -- common/autotest_common.sh@958 -- # kill -0 2687502 00:06:37.039 12:20:42 -- common/autotest_common.sh@959 -- # uname 00:06:37.039 12:20:42 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.039 12:20:42 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2687502 00:06:37.039 12:20:42 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.039 12:20:42 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.039 12:20:42 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2687502' 00:06:37.039 killing process with pid 2687502 00:06:37.039 12:20:42 -- common/autotest_common.sh@973 -- # kill 2687502 00:06:37.039 12:20:42 -- common/autotest_common.sh@978 -- # wait 2687502 00:06:38.941 12:20:44 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:38.941 12:20:44 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:38.941 12:20:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:38.941 12:20:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:38.941 12:20:44 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:38.941 12:20:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:38.941 12:20:44 -- common/autotest_common.sh@10 -- # set +x 00:06:38.941 12:20:44 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:38.941 12:20:44 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 
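The bdev_nvme_opal_revert failure above is tolerated by design: the drive refuses to start the admin SP session (error 18), rpc.py returns the -32603 JSON-RPC error, and the harness continues (note the '-- # true' in the trace). A condensed sketch of that revert flow using the same rpc.py calls the trace shows; pid handling is simplified, and waitforlisten/killprocess stand in for the sourced harness helpers:

    # Condensed sketch of the traced OPAL revert cleanup (assumes harness helpers are sourced).
    rpc="$rootdir/scripts/rpc.py"

    "$rootdir/build/bin/spdk_tgt" &        # start the SPDK target
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"          # block until /var/tmp/spdk.sock answers

    # Attach the controller at its PCI address, then try to revert OPAL with the test password.
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0
    "$rpc" bdev_nvme_opal_revert -b nvme0 -p test || true   # may fail, as it does above

    killprocess "$spdk_tgt_pid"            # kill the pid and wait for it to exit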
00:06:38.941 12:20:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.941 12:20:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.941 12:20:44 -- common/autotest_common.sh@10 -- # set +x 00:06:38.941 ************************************ 00:06:38.941 START TEST env 00:06:38.941 ************************************ 00:06:38.941 12:20:44 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:06:38.941 * Looking for test storage... 00:06:38.941 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:06:38.941 12:20:44 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.941 12:20:44 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.941 12:20:44 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.941 12:20:44 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.941 12:20:44 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.941 12:20:44 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.941 12:20:44 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.941 12:20:44 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.941 12:20:44 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.941 12:20:44 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.941 12:20:44 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.941 12:20:44 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.941 12:20:44 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.941 12:20:44 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.941 12:20:44 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.941 12:20:44 env -- scripts/common.sh@344 -- # case "$op" in 00:06:38.941 12:20:44 env -- scripts/common.sh@345 -- # : 1 00:06:38.941 12:20:44 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.941 12:20:44 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.941 12:20:44 env -- scripts/common.sh@365 -- # decimal 1 00:06:38.941 12:20:44 env -- scripts/common.sh@353 -- # local d=1 00:06:38.941 12:20:44 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.941 12:20:44 env -- scripts/common.sh@355 -- # echo 1 00:06:38.941 12:20:44 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.941 12:20:44 env -- scripts/common.sh@366 -- # decimal 2 00:06:38.941 12:20:44 env -- scripts/common.sh@353 -- # local d=2 00:06:38.941 12:20:44 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.941 12:20:44 env -- scripts/common.sh@355 -- # echo 2 00:06:38.941 12:20:44 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.941 12:20:44 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.941 12:20:44 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.941 12:20:44 env -- scripts/common.sh@368 -- # return 0 00:06:38.941 12:20:44 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.942 12:20:44 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.942 --rc genhtml_branch_coverage=1 00:06:38.942 --rc genhtml_function_coverage=1 00:06:38.942 --rc genhtml_legend=1 00:06:38.942 --rc geninfo_all_blocks=1 00:06:38.942 --rc geninfo_unexecuted_blocks=1 00:06:38.942 00:06:38.942 ' 00:06:38.942 12:20:44 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.942 --rc genhtml_branch_coverage=1 00:06:38.942 --rc genhtml_function_coverage=1 00:06:38.942 --rc genhtml_legend=1 00:06:38.942 --rc geninfo_all_blocks=1 00:06:38.942 --rc geninfo_unexecuted_blocks=1 00:06:38.942 00:06:38.942 ' 00:06:38.942 12:20:44 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.942 --rc genhtml_branch_coverage=1 00:06:38.942 --rc genhtml_function_coverage=1 00:06:38.942 --rc genhtml_legend=1 00:06:38.942 --rc geninfo_all_blocks=1 00:06:38.942 --rc geninfo_unexecuted_blocks=1 00:06:38.942 00:06:38.942 ' 00:06:38.942 12:20:44 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.942 --rc genhtml_branch_coverage=1 00:06:38.942 --rc genhtml_function_coverage=1 00:06:38.942 --rc genhtml_legend=1 00:06:38.942 --rc geninfo_all_blocks=1 00:06:38.942 --rc geninfo_unexecuted_blocks=1 00:06:38.942 00:06:38.942 ' 00:06:38.942 12:20:44 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:06:38.942 12:20:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.942 12:20:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.942 12:20:44 env -- common/autotest_common.sh@10 -- # set +x 00:06:38.942 ************************************ 00:06:38.942 START TEST env_memory 00:06:38.942 ************************************ 00:06:38.942 12:20:44 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:06:38.942 00:06:38.942 00:06:38.942 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.942 http://cunit.sourceforge.net/ 00:06:38.942 00:06:38.942 00:06:38.942 Suite: memory 00:06:38.942 Test: alloc and free memory map ...[2024-11-20 12:20:44.540575] 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:38.942 passed 00:06:38.942 Test: mem map translation ...[2024-11-20 12:20:44.572476] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:38.942 [2024-11-20 12:20:44.572511] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:38.942 [2024-11-20 12:20:44.572564] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:38.942 [2024-11-20 12:20:44.572579] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:38.942 passed 00:06:38.942 Test: mem map registration ...[2024-11-20 12:20:44.635152] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:38.942 [2024-11-20 12:20:44.635175] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:38.942 passed 00:06:39.201 Test: mem map adjacent registrations ...passed 00:06:39.201 00:06:39.201 Run Summary: Type Total Ran Passed Failed Inactive 00:06:39.201 suites 1 1 n/a 0 0 00:06:39.201 tests 4 4 4 0 0 00:06:39.201 asserts 152 152 152 0 n/a 00:06:39.201 00:06:39.201 Elapsed time = 0.220 seconds 00:06:39.201 00:06:39.201 real 0m0.231s 00:06:39.202 user 0m0.218s 00:06:39.202 sys 0m0.012s 00:06:39.202 12:20:44 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.202 12:20:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:39.202 ************************************ 00:06:39.202 END TEST env_memory 00:06:39.202 ************************************ 00:06:39.202 12:20:44 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:39.202 12:20:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.202 12:20:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.202 12:20:44 env -- common/autotest_common.sh@10 -- # set +x 00:06:39.202 ************************************ 00:06:39.202 START TEST env_vtophys 00:06:39.202 ************************************ 00:06:39.202 12:20:44 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:39.202 EAL: lib.eal log level changed from notice to debug 00:06:39.202 EAL: Detected lcore 0 as core 0 on socket 0 00:06:39.202 EAL: Detected lcore 1 as core 1 on socket 0 00:06:39.202 EAL: Detected lcore 2 as core 2 on socket 0 00:06:39.202 EAL: Detected lcore 3 as core 3 on socket 0 00:06:39.202 EAL: Detected lcore 4 as core 4 on socket 0 00:06:39.202 EAL: Detected lcore 5 as core 5 on socket 0 00:06:39.202 EAL: Detected lcore 6 as core 6 on socket 0 00:06:39.202 EAL: Detected lcore 7 as core 7 on socket 0 00:06:39.202 EAL: Detected lcore 8 as core 0 on socket 1 00:06:39.202 EAL: Detected lcore 9 as core 1 on socket 1 00:06:39.202 EAL: Detected lcore 10 as core 2 on socket 1 00:06:39.202 EAL: 
Detected lcore 11 as core 3 on socket 1 00:06:39.202 EAL: Detected lcore 12 as core 4 on socket 1 00:06:39.202 EAL: Detected lcore 13 as core 5 on socket 1 00:06:39.202 EAL: Detected lcore 14 as core 6 on socket 1 00:06:39.202 EAL: Detected lcore 15 as core 7 on socket 1 00:06:39.202 EAL: Maximum logical cores by configuration: 128 00:06:39.202 EAL: Detected CPU lcores: 16 00:06:39.202 EAL: Detected NUMA nodes: 2 00:06:39.202 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:39.202 EAL: Detected shared linkage of DPDK 00:06:39.202 EAL: No shared files mode enabled, IPC will be disabled 00:06:39.202 EAL: Bus pci wants IOVA as 'DC' 00:06:39.202 EAL: Buses did not request a specific IOVA mode. 00:06:39.202 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:39.202 EAL: Selected IOVA mode 'VA' 00:06:39.202 EAL: Probing VFIO support... 00:06:39.202 EAL: IOMMU type 1 (Type 1) is supported 00:06:39.202 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:39.202 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:39.202 EAL: VFIO support initialized 00:06:39.202 EAL: Ask a virtual area of 0x2e000 bytes 00:06:39.202 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:39.202 EAL: Setting up physically contiguous memory... 00:06:39.202 EAL: Setting maximum number of open files to 524288 00:06:39.202 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:39.202 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:39.202 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:39.202 EAL: Ask a virtual area of 0x61000 bytes 00:06:39.202 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:39.202 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:39.202 EAL: Ask a virtual area of 0x400000000 bytes 00:06:39.202 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:39.202 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:39.202 EAL: Ask a virtual area of 0x61000 bytes 00:06:39.202 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:39.202 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:39.202 EAL: Ask a virtual area of 0x400000000 bytes 00:06:39.202 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:39.202 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:39.202 EAL: Ask a virtual area of 0x61000 bytes 00:06:39.202 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:39.202 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:39.202 EAL: Ask a virtual area of 0x400000000 bytes 00:06:39.202 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:39.202 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:39.202 EAL: Ask a virtual area of 0x61000 bytes 00:06:39.202 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:39.202 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:39.202 EAL: Ask a virtual area of 0x400000000 bytes 00:06:39.202 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:39.202 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:39.202 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:39.202 EAL: Ask a virtual area of 0x61000 bytes 00:06:39.202 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:39.202 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:39.202 EAL: Ask a virtual area of 0x400000000 bytes 00:06:39.202 
EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:39.202 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:39.202 EAL: Ask a virtual area of 0x61000 bytes 00:06:39.202 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:39.202 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:39.202 EAL: Ask a virtual area of 0x400000000 bytes 00:06:39.202 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:39.202 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:39.202 EAL: Ask a virtual area of 0x61000 bytes 00:06:39.202 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:39.202 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:39.202 EAL: Ask a virtual area of 0x400000000 bytes 00:06:39.202 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:39.202 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:39.202 EAL: Ask a virtual area of 0x61000 bytes 00:06:39.202 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:39.202 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:39.202 EAL: Ask a virtual area of 0x400000000 bytes 00:06:39.202 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:39.202 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:39.202 EAL: Hugepages will be freed exactly as allocated. 00:06:39.202 EAL: No shared files mode enabled, IPC is disabled 00:06:39.202 EAL: No shared files mode enabled, IPC is disabled 00:06:39.202 EAL: TSC frequency is ~2700000 KHz 00:06:39.202 EAL: Main lcore 0 is ready (tid=7f797a68fa00;cpuset=[0]) 00:06:39.202 EAL: Trying to obtain current memory policy. 00:06:39.202 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.202 EAL: Restoring previous memory policy: 0 00:06:39.202 EAL: request: mp_malloc_sync 00:06:39.202 EAL: No shared files mode enabled, IPC is disabled 00:06:39.202 EAL: Heap on socket 0 was expanded by 2MB 00:06:39.202 EAL: No shared files mode enabled, IPC is disabled 00:06:39.202 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:39.202 EAL: Mem event callback 'spdk:(nil)' registered 00:06:39.202 00:06:39.202 00:06:39.202 CUnit - A unit testing framework for C - Version 2.1-3 00:06:39.202 http://cunit.sourceforge.net/ 00:06:39.202 00:06:39.202 00:06:39.202 Suite: components_suite 00:06:39.202 Test: vtophys_malloc_test ...passed 00:06:39.202 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:39.202 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.202 EAL: Restoring previous memory policy: 4 00:06:39.202 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.202 EAL: request: mp_malloc_sync 00:06:39.202 EAL: No shared files mode enabled, IPC is disabled 00:06:39.202 EAL: Heap on socket 0 was expanded by 4MB 00:06:39.202 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.202 EAL: request: mp_malloc_sync 00:06:39.202 EAL: No shared files mode enabled, IPC is disabled 00:06:39.202 EAL: Heap on socket 0 was shrunk by 4MB 00:06:39.202 EAL: Trying to obtain current memory policy. 
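The reservation sizes in this EAL setup are self-consistent: each memseg list covers 8192 segments of 2 MiB (the 0x800 kB hugepage size), i.e. 8192 × 2 MiB = 16 GiB = 0x400000000 bytes of virtual space, preceded by a 0x61000-byte header. With 4 lists per socket across 2 NUMA nodes, EAL reserves 8 × 16 GiB = 128 GiB of address space up front, before any hugepages are actually allocated.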
00:06:39.202 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.202 EAL: Restoring previous memory policy: 4 00:06:39.202 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.202 EAL: request: mp_malloc_sync 00:06:39.202 EAL: No shared files mode enabled, IPC is disabled 00:06:39.202 EAL: Heap on socket 0 was expanded by 6MB 00:06:39.202 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.202 EAL: request: mp_malloc_sync 00:06:39.202 EAL: No shared files mode enabled, IPC is disabled 00:06:39.202 EAL: Heap on socket 0 was shrunk by 6MB 00:06:39.202 EAL: Trying to obtain current memory policy. 00:06:39.202 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.202 EAL: Restoring previous memory policy: 4 00:06:39.202 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.202 EAL: request: mp_malloc_sync 00:06:39.202 EAL: No shared files mode enabled, IPC is disabled 00:06:39.202 EAL: Heap on socket 0 was expanded by 10MB 00:06:39.202 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.202 EAL: request: mp_malloc_sync 00:06:39.202 EAL: No shared files mode enabled, IPC is disabled 00:06:39.202 EAL: Heap on socket 0 was shrunk by 10MB 00:06:39.202 EAL: Trying to obtain current memory policy. 00:06:39.202 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.202 EAL: Restoring previous memory policy: 4 00:06:39.202 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.202 EAL: request: mp_malloc_sync 00:06:39.202 EAL: No shared files mode enabled, IPC is disabled 00:06:39.202 EAL: Heap on socket 0 was expanded by 18MB 00:06:39.202 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.202 EAL: request: mp_malloc_sync 00:06:39.202 EAL: No shared files mode enabled, IPC is disabled 00:06:39.202 EAL: Heap on socket 0 was shrunk by 18MB 00:06:39.202 EAL: Trying to obtain current memory policy. 00:06:39.202 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.202 EAL: Restoring previous memory policy: 4 00:06:39.202 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.202 EAL: request: mp_malloc_sync 00:06:39.202 EAL: No shared files mode enabled, IPC is disabled 00:06:39.202 EAL: Heap on socket 0 was expanded by 34MB 00:06:39.202 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.202 EAL: request: mp_malloc_sync 00:06:39.202 EAL: No shared files mode enabled, IPC is disabled 00:06:39.202 EAL: Heap on socket 0 was shrunk by 34MB 00:06:39.202 EAL: Trying to obtain current memory policy. 00:06:39.202 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.203 EAL: Restoring previous memory policy: 4 00:06:39.203 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.203 EAL: request: mp_malloc_sync 00:06:39.203 EAL: No shared files mode enabled, IPC is disabled 00:06:39.203 EAL: Heap on socket 0 was expanded by 66MB 00:06:39.203 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.203 EAL: request: mp_malloc_sync 00:06:39.203 EAL: No shared files mode enabled, IPC is disabled 00:06:39.203 EAL: Heap on socket 0 was shrunk by 66MB 00:06:39.203 EAL: Trying to obtain current memory policy. 
00:06:39.203 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.203 EAL: Restoring previous memory policy: 4 00:06:39.203 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.203 EAL: request: mp_malloc_sync 00:06:39.203 EAL: No shared files mode enabled, IPC is disabled 00:06:39.203 EAL: Heap on socket 0 was expanded by 130MB 00:06:39.203 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.462 EAL: request: mp_malloc_sync 00:06:39.462 EAL: No shared files mode enabled, IPC is disabled 00:06:39.462 EAL: Heap on socket 0 was shrunk by 130MB 00:06:39.462 EAL: Trying to obtain current memory policy. 00:06:39.462 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.462 EAL: Restoring previous memory policy: 4 00:06:39.462 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.462 EAL: request: mp_malloc_sync 00:06:39.462 EAL: No shared files mode enabled, IPC is disabled 00:06:39.462 EAL: Heap on socket 0 was expanded by 258MB 00:06:39.462 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.462 EAL: request: mp_malloc_sync 00:06:39.462 EAL: No shared files mode enabled, IPC is disabled 00:06:39.462 EAL: Heap on socket 0 was shrunk by 258MB 00:06:39.462 EAL: Trying to obtain current memory policy. 00:06:39.462 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.462 EAL: Restoring previous memory policy: 4 00:06:39.462 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.462 EAL: request: mp_malloc_sync 00:06:39.462 EAL: No shared files mode enabled, IPC is disabled 00:06:39.462 EAL: Heap on socket 0 was expanded by 514MB 00:06:39.720 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.720 EAL: request: mp_malloc_sync 00:06:39.720 EAL: No shared files mode enabled, IPC is disabled 00:06:39.720 EAL: Heap on socket 0 was shrunk by 514MB 00:06:39.720 EAL: Trying to obtain current memory policy. 
00:06:39.720 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.978 EAL: Restoring previous memory policy: 4 00:06:39.978 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.978 EAL: request: mp_malloc_sync 00:06:39.978 EAL: No shared files mode enabled, IPC is disabled 00:06:39.978 EAL: Heap on socket 0 was expanded by 1026MB 00:06:39.978 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.237 EAL: request: mp_malloc_sync 00:06:40.237 EAL: No shared files mode enabled, IPC is disabled 00:06:40.237 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:40.237 passed 00:06:40.237 00:06:40.237 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.237 suites 1 1 n/a 0 0 00:06:40.237 tests 2 2 2 0 0 00:06:40.237 asserts 497 497 497 0 n/a 00:06:40.237 00:06:40.237 Elapsed time = 0.953 seconds 00:06:40.237 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.237 EAL: request: mp_malloc_sync 00:06:40.237 EAL: No shared files mode enabled, IPC is disabled 00:06:40.237 EAL: Heap on socket 0 was shrunk by 2MB 00:06:40.237 EAL: No shared files mode enabled, IPC is disabled 00:06:40.237 EAL: No shared files mode enabled, IPC is disabled 00:06:40.237 EAL: No shared files mode enabled, IPC is disabled 00:06:40.237 00:06:40.237 real 0m1.087s 00:06:40.237 user 0m0.529s 00:06:40.237 sys 0m0.525s 00:06:40.237 12:20:45 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.237 12:20:45 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:40.237 ************************************ 00:06:40.237 END TEST env_vtophys 00:06:40.237 ************************************ 00:06:40.237 12:20:45 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:06:40.237 12:20:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.237 12:20:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.237 12:20:45 env -- common/autotest_common.sh@10 -- # set +x 00:06:40.237 ************************************ 00:06:40.237 START TEST env_pci 00:06:40.237 ************************************ 00:06:40.237 12:20:45 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:06:40.237 00:06:40.237 00:06:40.237 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.237 http://cunit.sourceforge.net/ 00:06:40.237 00:06:40.237 00:06:40.237 Suite: pci 00:06:40.237 Test: pci_hook ...[2024-11-20 12:20:45.903319] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2688167 has claimed it 00:06:40.237 EAL: Cannot find device (10000:00:01.0) 00:06:40.237 EAL: Failed to attach device on primary process 00:06:40.237 passed 00:06:40.237 00:06:40.237 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.237 suites 1 1 n/a 0 0 00:06:40.237 tests 1 1 1 0 0 00:06:40.237 asserts 25 25 25 0 n/a 00:06:40.237 00:06:40.237 Elapsed time = 0.021 seconds 00:06:40.237 00:06:40.237 real 0m0.040s 00:06:40.237 user 0m0.014s 00:06:40.237 sys 0m0.026s 00:06:40.237 12:20:45 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.237 12:20:45 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:40.238 ************************************ 00:06:40.238 END TEST env_pci 00:06:40.238 ************************************ 00:06:40.238 12:20:45 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:40.238 12:20:45 env -- 
env/env.sh@15 -- # uname 00:06:40.238 12:20:45 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:40.238 12:20:45 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:40.238 12:20:45 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:40.238 12:20:45 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:40.238 12:20:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.238 12:20:45 env -- common/autotest_common.sh@10 -- # set +x 00:06:40.238 ************************************ 00:06:40.238 START TEST env_dpdk_post_init 00:06:40.238 ************************************ 00:06:40.238 12:20:45 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:40.498 EAL: Detected CPU lcores: 16 00:06:40.498 EAL: Detected NUMA nodes: 2 00:06:40.498 EAL: Detected shared linkage of DPDK 00:06:40.498 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:40.498 EAL: Selected IOVA mode 'VA' 00:06:40.498 EAL: VFIO support initialized 00:06:40.498 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:40.498 EAL: Using IOMMU type 1 (Type 1) 00:06:40.498 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:00:04.0 (socket 0) 00:06:40.498 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:00:04.1 (socket 0) 00:06:40.498 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 0000:00:04.2 (socket 0) 00:06:40.498 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:00:04.3 (socket 0) 00:06:40.498 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:00:04.4 (socket 0) 00:06:40.498 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:00:04.5 (socket 0) 00:06:40.498 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:00:04.6 (socket 0) 00:06:40.498 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:00:04.7 (socket 0) 00:06:40.498 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:80:04.0 (socket 1) 00:06:40.498 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:80:04.1 (socket 1) 00:06:40.498 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 0000:80:04.2 (socket 1) 00:06:40.498 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:80:04.3 (socket 1) 00:06:40.758 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:80:04.4 (socket 1) 00:06:40.758 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:80:04.5 (socket 1) 00:06:40.758 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:80:04.6 (socket 1) 00:06:40.758 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:80:04.7 (socket 1) 00:06:41.328 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 00:06:44.611 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:06:44.611 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:06:44.871 Starting DPDK initialization... 00:06:44.871 Starting SPDK post initialization... 00:06:44.871 SPDK NVMe probe 00:06:44.871 Attaching to 0000:82:00.0 00:06:44.871 Attached to 0000:82:00.0 00:06:44.871 Cleaning up... 
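Per the env.sh trace above, the post-init test is launched with a hand-built EAL argument string: a one-core mask plus, on Linux, a fixed base virtual address so mappings land predictably. A minimal sketch of that argv assembly (the real script wraps the call in run_test):

    # Sketch of env.sh's EAL argv assembly for env_dpdk_post_init (per the trace above).
    argv='-c 0x1 '                                  # run on core 0 only
    if [[ $(uname) == Linux ]]; then
        argv+=--base-virtaddr=0x200000000000        # pin the EAL VA base on Linux
    fi
    # argv is intentionally unquoted so it word-splits into separate EAL flags
    "$rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init" $argv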
00:06:44.871 00:06:44.871 real 0m4.468s 00:06:44.871 user 0m3.084s 00:06:44.871 sys 0m0.447s 00:06:44.871 12:20:50 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.871 12:20:50 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:44.871 ************************************ 00:06:44.871 END TEST env_dpdk_post_init 00:06:44.871 ************************************ 00:06:44.871 12:20:50 env -- env/env.sh@26 -- # uname 00:06:44.871 12:20:50 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:44.871 12:20:50 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:44.871 12:20:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.871 12:20:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.871 12:20:50 env -- common/autotest_common.sh@10 -- # set +x 00:06:44.871 ************************************ 00:06:44.871 START TEST env_mem_callbacks 00:06:44.871 ************************************ 00:06:44.871 12:20:50 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:44.871 EAL: Detected CPU lcores: 16 00:06:44.871 EAL: Detected NUMA nodes: 2 00:06:44.872 EAL: Detected shared linkage of DPDK 00:06:44.872 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:44.872 EAL: Selected IOVA mode 'VA' 00:06:44.872 EAL: VFIO support initialized 00:06:44.872 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:44.872 00:06:44.872 00:06:44.872 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.872 http://cunit.sourceforge.net/ 00:06:44.872 00:06:44.872 00:06:44.872 Suite: memory 00:06:44.872 Test: test ... 
00:06:44.872 register 0x200000200000 2097152 00:06:44.872 malloc 3145728 00:06:44.872 register 0x200000400000 4194304 00:06:44.872 buf 0x200000500000 len 3145728 PASSED 00:06:44.872 malloc 64 00:06:44.872 buf 0x2000004fff40 len 64 PASSED 00:06:44.872 malloc 4194304 00:06:44.872 register 0x200000800000 6291456 00:06:44.872 buf 0x200000a00000 len 4194304 PASSED 00:06:44.872 free 0x200000500000 3145728 00:06:44.872 free 0x2000004fff40 64 00:06:44.872 unregister 0x200000400000 4194304 PASSED 00:06:44.872 free 0x200000a00000 4194304 00:06:44.872 unregister 0x200000800000 6291456 PASSED 00:06:44.872 malloc 8388608 00:06:44.872 register 0x200000400000 10485760 00:06:44.872 buf 0x200000600000 len 8388608 PASSED 00:06:44.872 free 0x200000600000 8388608 00:06:44.872 unregister 0x200000400000 10485760 PASSED 00:06:44.872 passed 00:06:44.872 00:06:44.872 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.872 suites 1 1 n/a 0 0 00:06:44.872 tests 1 1 1 0 0 00:06:44.872 asserts 15 15 15 0 n/a 00:06:44.872 00:06:44.872 Elapsed time = 0.006 seconds 00:06:44.872 00:06:44.872 real 0m0.067s 00:06:44.872 user 0m0.024s 00:06:44.872 sys 0m0.042s 00:06:44.872 12:20:50 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.872 12:20:50 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:44.872 ************************************ 00:06:44.872 END TEST env_mem_callbacks 00:06:44.872 ************************************ 00:06:44.872 00:06:44.872 real 0m6.288s 00:06:44.872 user 0m4.062s 00:06:44.872 sys 0m1.299s 00:06:44.872 12:20:50 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.872 12:20:50 env -- common/autotest_common.sh@10 -- # set +x 00:06:44.872 ************************************ 00:06:44.872 END TEST env 00:06:44.872 ************************************ 00:06:44.872 12:20:50 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:06:44.872 12:20:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.872 12:20:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.872 12:20:50 -- common/autotest_common.sh@10 -- # set +x 00:06:44.872 ************************************ 00:06:44.872 START TEST rpc 00:06:44.872 ************************************ 00:06:44.872 12:20:50 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:06:45.131 * Looking for test storage... 
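Every suite in this log is wrapped by the run_test helper, which is what emits the START TEST / END TEST banners and feeds the cumulative real/user/sys totals printed above. A simplified sketch of the wrapper's shape; the real helper in autotest_common.sh also manages xtrace state and timing bookkeeping:

    # Simplified run_test sketch: banner, timed execution, banner.
    run_test() {
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"                      # timings roll up into the per-suite totals
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }

    run_test env_memory "$rootdir/test/env/memory/memory_ut"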
00:06:45.131 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:45.131 12:20:50 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:45.131 12:20:50 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:45.131 12:20:50 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:45.131 12:20:50 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:45.131 12:20:50 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.131 12:20:50 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.131 12:20:50 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.131 12:20:50 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.131 12:20:50 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.131 12:20:50 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.131 12:20:50 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.131 12:20:50 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.131 12:20:50 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.131 12:20:50 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.131 12:20:50 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.131 12:20:50 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:45.131 12:20:50 rpc -- scripts/common.sh@345 -- # : 1 00:06:45.131 12:20:50 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.131 12:20:50 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.131 12:20:50 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:45.131 12:20:50 rpc -- scripts/common.sh@353 -- # local d=1 00:06:45.131 12:20:50 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.131 12:20:50 rpc -- scripts/common.sh@355 -- # echo 1 00:06:45.131 12:20:50 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.131 12:20:50 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:45.390 12:20:50 rpc -- scripts/common.sh@353 -- # local d=2 00:06:45.390 12:20:50 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.390 12:20:50 rpc -- scripts/common.sh@355 -- # echo 2 00:06:45.390 12:20:50 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.390 12:20:50 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.390 12:20:50 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.390 12:20:50 rpc -- scripts/common.sh@368 -- # return 0 00:06:45.390 12:20:50 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.390 12:20:50 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:45.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.390 --rc genhtml_branch_coverage=1 00:06:45.390 --rc genhtml_function_coverage=1 00:06:45.390 --rc genhtml_legend=1 00:06:45.390 --rc geninfo_all_blocks=1 00:06:45.390 --rc geninfo_unexecuted_blocks=1 00:06:45.390 00:06:45.390 ' 00:06:45.390 12:20:50 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:45.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.390 --rc genhtml_branch_coverage=1 00:06:45.390 --rc genhtml_function_coverage=1 00:06:45.390 --rc genhtml_legend=1 00:06:45.390 --rc geninfo_all_blocks=1 00:06:45.390 --rc geninfo_unexecuted_blocks=1 00:06:45.390 00:06:45.390 ' 00:06:45.390 12:20:50 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:45.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.390 --rc genhtml_branch_coverage=1 00:06:45.390 --rc genhtml_function_coverage=1 00:06:45.390 
--rc genhtml_legend=1 00:06:45.390 --rc geninfo_all_blocks=1 00:06:45.390 --rc geninfo_unexecuted_blocks=1 00:06:45.390 00:06:45.390 ' 00:06:45.390 12:20:50 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:45.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.390 --rc genhtml_branch_coverage=1 00:06:45.390 --rc genhtml_function_coverage=1 00:06:45.390 --rc genhtml_legend=1 00:06:45.390 --rc geninfo_all_blocks=1 00:06:45.390 --rc geninfo_unexecuted_blocks=1 00:06:45.390 00:06:45.390 ' 00:06:45.390 12:20:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2688692 00:06:45.390 12:20:50 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:45.390 12:20:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.390 12:20:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2688692 00:06:45.390 12:20:50 rpc -- common/autotest_common.sh@835 -- # '[' -z 2688692 ']' 00:06:45.390 12:20:50 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.390 12:20:50 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.390 12:20:50 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.390 12:20:50 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.390 12:20:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.390 [2024-11-20 12:20:50.962794] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:06:45.390 [2024-11-20 12:20:50.962888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688692 ] 00:06:45.390 [2024-11-20 12:20:51.035117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.390 [2024-11-20 12:20:51.098984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:45.390 [2024-11-20 12:20:51.099045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2688692' to capture a snapshot of events at runtime. 00:06:45.390 [2024-11-20 12:20:51.099060] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:45.390 [2024-11-20 12:20:51.099073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:45.390 [2024-11-20 12:20:51.099085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2688692 for offline analysis/debug. 
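The NOTICE block above spells out how to snapshot the tracepoints enabled by the '-e bdev' flag. As a hedged sketch (binary paths assume the standard SPDK build layout rather than anything printed in this log), the two options it offers are:

    # Capture a snapshot of events from the live target, exactly as the NOTICE quotes:
    ./build/bin/spdk_trace -s spdk_tgt -p 2688692
    # Or preserve the shm ring buffer for offline analysis, per the final NOTICE line:
    cp /dev/shm/spdk_tgt_trace.pid2688692 /tmp/spdk_tgt_trace.snapshot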
00:06:45.390 [2024-11-20 12:20:51.099615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.649 12:20:51 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.649 12:20:51 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:45.649 12:20:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:45.649 12:20:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:45.649 12:20:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:45.649 12:20:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:45.649 12:20:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.649 12:20:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.649 12:20:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.649 ************************************ 00:06:45.649 START TEST rpc_integrity 00:06:45.649 ************************************ 00:06:45.649 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:45.649 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:45.649 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.649 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:45.649 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.649 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:45.649 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:45.908 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:45.908 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:45.908 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.908 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:45.908 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.908 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:45.908 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:45.908 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.908 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:45.908 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.908 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:45.908 { 00:06:45.908 "name": "Malloc0", 00:06:45.908 "aliases": [ 00:06:45.908 "2b39a1f1-c683-4eb7-9b96-74ef0bf6e1ec" 00:06:45.908 ], 00:06:45.908 "product_name": "Malloc disk", 00:06:45.908 "block_size": 512, 00:06:45.908 "num_blocks": 16384, 00:06:45.908 "uuid": "2b39a1f1-c683-4eb7-9b96-74ef0bf6e1ec", 00:06:45.908 "assigned_rate_limits": { 00:06:45.908 "rw_ios_per_sec": 0, 00:06:45.908 "rw_mbytes_per_sec": 0, 00:06:45.908 "r_mbytes_per_sec": 0, 00:06:45.908 "w_mbytes_per_sec": 0 00:06:45.908 }, 00:06:45.908 "claimed": false, 
00:06:45.908 "zoned": false, 00:06:45.908 "supported_io_types": { 00:06:45.908 "read": true, 00:06:45.908 "write": true, 00:06:45.908 "unmap": true, 00:06:45.908 "flush": true, 00:06:45.908 "reset": true, 00:06:45.908 "nvme_admin": false, 00:06:45.908 "nvme_io": false, 00:06:45.908 "nvme_io_md": false, 00:06:45.908 "write_zeroes": true, 00:06:45.908 "zcopy": true, 00:06:45.908 "get_zone_info": false, 00:06:45.908 "zone_management": false, 00:06:45.908 "zone_append": false, 00:06:45.908 "compare": false, 00:06:45.908 "compare_and_write": false, 00:06:45.908 "abort": true, 00:06:45.908 "seek_hole": false, 00:06:45.908 "seek_data": false, 00:06:45.908 "copy": true, 00:06:45.908 "nvme_iov_md": false 00:06:45.908 }, 00:06:45.908 "memory_domains": [ 00:06:45.908 { 00:06:45.908 "dma_device_id": "system", 00:06:45.908 "dma_device_type": 1 00:06:45.908 }, 00:06:45.908 { 00:06:45.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.908 "dma_device_type": 2 00:06:45.908 } 00:06:45.908 ], 00:06:45.908 "driver_specific": {} 00:06:45.908 } 00:06:45.908 ]' 00:06:45.908 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:45.908 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:45.908 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:45.908 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.908 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:45.908 [2024-11-20 12:20:51.487495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:45.908 [2024-11-20 12:20:51.487551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:45.908 [2024-11-20 12:20:51.487576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23050b0 00:06:45.908 [2024-11-20 12:20:51.487600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:45.908 [2024-11-20 12:20:51.489144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:45.908 [2024-11-20 12:20:51.489170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:45.908 Passthru0 00:06:45.908 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.908 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:45.908 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.908 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:45.908 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.908 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:45.908 { 00:06:45.908 "name": "Malloc0", 00:06:45.908 "aliases": [ 00:06:45.908 "2b39a1f1-c683-4eb7-9b96-74ef0bf6e1ec" 00:06:45.908 ], 00:06:45.908 "product_name": "Malloc disk", 00:06:45.908 "block_size": 512, 00:06:45.908 "num_blocks": 16384, 00:06:45.908 "uuid": "2b39a1f1-c683-4eb7-9b96-74ef0bf6e1ec", 00:06:45.908 "assigned_rate_limits": { 00:06:45.908 "rw_ios_per_sec": 0, 00:06:45.908 "rw_mbytes_per_sec": 0, 00:06:45.908 "r_mbytes_per_sec": 0, 00:06:45.908 "w_mbytes_per_sec": 0 00:06:45.908 }, 00:06:45.908 "claimed": true, 00:06:45.908 "claim_type": "exclusive_write", 00:06:45.908 "zoned": false, 00:06:45.908 "supported_io_types": { 00:06:45.908 "read": true, 00:06:45.908 "write": true, 00:06:45.908 "unmap": true, 00:06:45.908 "flush": true, 00:06:45.908 "reset": true, 
00:06:45.908 "nvme_admin": false, 00:06:45.908 "nvme_io": false, 00:06:45.908 "nvme_io_md": false, 00:06:45.908 "write_zeroes": true, 00:06:45.908 "zcopy": true, 00:06:45.908 "get_zone_info": false, 00:06:45.908 "zone_management": false, 00:06:45.908 "zone_append": false, 00:06:45.908 "compare": false, 00:06:45.908 "compare_and_write": false, 00:06:45.908 "abort": true, 00:06:45.908 "seek_hole": false, 00:06:45.908 "seek_data": false, 00:06:45.908 "copy": true, 00:06:45.908 "nvme_iov_md": false 00:06:45.908 }, 00:06:45.908 "memory_domains": [ 00:06:45.908 { 00:06:45.908 "dma_device_id": "system", 00:06:45.908 "dma_device_type": 1 00:06:45.908 }, 00:06:45.908 { 00:06:45.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.908 "dma_device_type": 2 00:06:45.908 } 00:06:45.908 ], 00:06:45.908 "driver_specific": {} 00:06:45.908 }, 00:06:45.908 { 00:06:45.908 "name": "Passthru0", 00:06:45.908 "aliases": [ 00:06:45.908 "e2526571-f4b3-5598-ab98-79d5448e6537" 00:06:45.908 ], 00:06:45.908 "product_name": "passthru", 00:06:45.908 "block_size": 512, 00:06:45.908 "num_blocks": 16384, 00:06:45.908 "uuid": "e2526571-f4b3-5598-ab98-79d5448e6537", 00:06:45.908 "assigned_rate_limits": { 00:06:45.908 "rw_ios_per_sec": 0, 00:06:45.908 "rw_mbytes_per_sec": 0, 00:06:45.908 "r_mbytes_per_sec": 0, 00:06:45.908 "w_mbytes_per_sec": 0 00:06:45.908 }, 00:06:45.908 "claimed": false, 00:06:45.908 "zoned": false, 00:06:45.908 "supported_io_types": { 00:06:45.908 "read": true, 00:06:45.908 "write": true, 00:06:45.909 "unmap": true, 00:06:45.909 "flush": true, 00:06:45.909 "reset": true, 00:06:45.909 "nvme_admin": false, 00:06:45.909 "nvme_io": false, 00:06:45.909 "nvme_io_md": false, 00:06:45.909 "write_zeroes": true, 00:06:45.909 "zcopy": true, 00:06:45.909 "get_zone_info": false, 00:06:45.909 "zone_management": false, 00:06:45.909 "zone_append": false, 00:06:45.909 "compare": false, 00:06:45.909 "compare_and_write": false, 00:06:45.909 "abort": true, 00:06:45.909 "seek_hole": false, 00:06:45.909 "seek_data": false, 00:06:45.909 "copy": true, 00:06:45.909 "nvme_iov_md": false 00:06:45.909 }, 00:06:45.909 "memory_domains": [ 00:06:45.909 { 00:06:45.909 "dma_device_id": "system", 00:06:45.909 "dma_device_type": 1 00:06:45.909 }, 00:06:45.909 { 00:06:45.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.909 "dma_device_type": 2 00:06:45.909 } 00:06:45.909 ], 00:06:45.909 "driver_specific": { 00:06:45.909 "passthru": { 00:06:45.909 "name": "Passthru0", 00:06:45.909 "base_bdev_name": "Malloc0" 00:06:45.909 } 00:06:45.909 } 00:06:45.909 } 00:06:45.909 ]' 00:06:45.909 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:45.909 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:45.909 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:45.909 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.909 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:45.909 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.909 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:45.909 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.909 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:45.909 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.909 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:45.909 
12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.909 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:45.909 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.909 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:45.909 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:45.909 12:20:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:45.909 00:06:45.909 real 0m0.271s 00:06:45.909 user 0m0.174s 00:06:45.909 sys 0m0.035s 00:06:45.909 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.909 12:20:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:45.909 ************************************ 00:06:45.909 END TEST rpc_integrity 00:06:45.909 ************************************ 00:06:45.909 12:20:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:45.909 12:20:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.909 12:20:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.909 12:20:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.909 ************************************ 00:06:45.909 START TEST rpc_plugins 00:06:45.909 ************************************ 00:06:45.909 12:20:51 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:45.909 12:20:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:45.909 12:20:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.909 12:20:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:45.909 12:20:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.909 12:20:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:45.909 12:20:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:45.909 12:20:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.909 12:20:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:46.167 12:20:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.167 12:20:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:46.167 { 00:06:46.167 "name": "Malloc1", 00:06:46.167 "aliases": [ 00:06:46.167 "4a2484b3-a6cf-4d15-9699-0b9ba6a669df" 00:06:46.167 ], 00:06:46.167 "product_name": "Malloc disk", 00:06:46.167 "block_size": 4096, 00:06:46.167 "num_blocks": 256, 00:06:46.167 "uuid": "4a2484b3-a6cf-4d15-9699-0b9ba6a669df", 00:06:46.167 "assigned_rate_limits": { 00:06:46.167 "rw_ios_per_sec": 0, 00:06:46.167 "rw_mbytes_per_sec": 0, 00:06:46.167 "r_mbytes_per_sec": 0, 00:06:46.167 "w_mbytes_per_sec": 0 00:06:46.167 }, 00:06:46.167 "claimed": false, 00:06:46.167 "zoned": false, 00:06:46.167 "supported_io_types": { 00:06:46.167 "read": true, 00:06:46.167 "write": true, 00:06:46.167 "unmap": true, 00:06:46.167 "flush": true, 00:06:46.167 "reset": true, 00:06:46.167 "nvme_admin": false, 00:06:46.167 "nvme_io": false, 00:06:46.167 "nvme_io_md": false, 00:06:46.167 "write_zeroes": true, 00:06:46.167 "zcopy": true, 00:06:46.167 "get_zone_info": false, 00:06:46.167 "zone_management": false, 00:06:46.167 "zone_append": false, 00:06:46.167 "compare": false, 00:06:46.167 "compare_and_write": false, 00:06:46.167 "abort": true, 00:06:46.168 "seek_hole": false, 00:06:46.168 "seek_data": false, 00:06:46.168 "copy": true, 00:06:46.168 "nvme_iov_md": false 00:06:46.168 }, 00:06:46.168 
"memory_domains": [ 00:06:46.168 { 00:06:46.168 "dma_device_id": "system", 00:06:46.168 "dma_device_type": 1 00:06:46.168 }, 00:06:46.168 { 00:06:46.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.168 "dma_device_type": 2 00:06:46.168 } 00:06:46.168 ], 00:06:46.168 "driver_specific": {} 00:06:46.168 } 00:06:46.168 ]' 00:06:46.168 12:20:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:46.168 12:20:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:46.168 12:20:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:46.168 12:20:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.168 12:20:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:46.168 12:20:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.168 12:20:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:46.168 12:20:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.168 12:20:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:46.168 12:20:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.168 12:20:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:46.168 12:20:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:46.168 12:20:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:46.168 00:06:46.168 real 0m0.144s 00:06:46.168 user 0m0.095s 00:06:46.168 sys 0m0.013s 00:06:46.168 12:20:51 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.168 12:20:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:46.168 ************************************ 00:06:46.168 END TEST rpc_plugins 00:06:46.168 ************************************ 00:06:46.168 12:20:51 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:46.168 12:20:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.168 12:20:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.168 12:20:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.168 ************************************ 00:06:46.168 START TEST rpc_trace_cmd_test 00:06:46.168 ************************************ 00:06:46.168 12:20:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:46.168 12:20:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:46.168 12:20:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:46.168 12:20:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.168 12:20:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.168 12:20:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.168 12:20:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:46.168 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2688692", 00:06:46.168 "tpoint_group_mask": "0x8", 00:06:46.168 "iscsi_conn": { 00:06:46.168 "mask": "0x2", 00:06:46.168 "tpoint_mask": "0x0" 00:06:46.168 }, 00:06:46.168 "scsi": { 00:06:46.168 "mask": "0x4", 00:06:46.168 "tpoint_mask": "0x0" 00:06:46.168 }, 00:06:46.168 "bdev": { 00:06:46.168 "mask": "0x8", 00:06:46.168 "tpoint_mask": "0xffffffffffffffff" 00:06:46.168 }, 00:06:46.168 "nvmf_rdma": { 00:06:46.168 "mask": "0x10", 00:06:46.168 "tpoint_mask": "0x0" 00:06:46.168 }, 00:06:46.168 "nvmf_tcp": { 00:06:46.168 "mask": "0x20", 00:06:46.168 "tpoint_mask": "0x0" 00:06:46.168 }, 
00:06:46.168 "ftl": { 00:06:46.168 "mask": "0x40", 00:06:46.168 "tpoint_mask": "0x0" 00:06:46.168 }, 00:06:46.168 "blobfs": { 00:06:46.168 "mask": "0x80", 00:06:46.168 "tpoint_mask": "0x0" 00:06:46.168 }, 00:06:46.168 "dsa": { 00:06:46.168 "mask": "0x200", 00:06:46.168 "tpoint_mask": "0x0" 00:06:46.168 }, 00:06:46.168 "thread": { 00:06:46.168 "mask": "0x400", 00:06:46.168 "tpoint_mask": "0x0" 00:06:46.168 }, 00:06:46.168 "nvme_pcie": { 00:06:46.168 "mask": "0x800", 00:06:46.168 "tpoint_mask": "0x0" 00:06:46.168 }, 00:06:46.168 "iaa": { 00:06:46.168 "mask": "0x1000", 00:06:46.168 "tpoint_mask": "0x0" 00:06:46.168 }, 00:06:46.168 "nvme_tcp": { 00:06:46.168 "mask": "0x2000", 00:06:46.168 "tpoint_mask": "0x0" 00:06:46.168 }, 00:06:46.168 "bdev_nvme": { 00:06:46.168 "mask": "0x4000", 00:06:46.168 "tpoint_mask": "0x0" 00:06:46.168 }, 00:06:46.168 "sock": { 00:06:46.168 "mask": "0x8000", 00:06:46.168 "tpoint_mask": "0x0" 00:06:46.168 }, 00:06:46.168 "blob": { 00:06:46.168 "mask": "0x10000", 00:06:46.168 "tpoint_mask": "0x0" 00:06:46.168 }, 00:06:46.168 "bdev_raid": { 00:06:46.168 "mask": "0x20000", 00:06:46.168 "tpoint_mask": "0x0" 00:06:46.168 }, 00:06:46.168 "scheduler": { 00:06:46.168 "mask": "0x40000", 00:06:46.168 "tpoint_mask": "0x0" 00:06:46.168 } 00:06:46.168 }' 00:06:46.168 12:20:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:46.168 12:20:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:46.168 12:20:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:46.427 12:20:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:46.427 12:20:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:46.427 12:20:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:46.427 12:20:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:46.427 12:20:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:46.427 12:20:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:46.427 12:20:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:46.427 00:06:46.427 real 0m0.253s 00:06:46.427 user 0m0.220s 00:06:46.427 sys 0m0.024s 00:06:46.427 12:20:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.427 12:20:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.427 ************************************ 00:06:46.427 END TEST rpc_trace_cmd_test 00:06:46.427 ************************************ 00:06:46.427 12:20:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:46.427 12:20:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:46.427 12:20:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:46.427 12:20:52 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.427 12:20:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.427 12:20:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.427 ************************************ 00:06:46.427 START TEST rpc_daemon_integrity 00:06:46.427 ************************************ 00:06:46.427 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:46.427 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:46.427 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.427 12:20:52 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:06:46.427 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.427 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:46.427 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:46.427 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:46.427 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:46.427 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.427 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.427 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.427 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:46.427 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:46.427 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.427 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.427 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.427 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:46.427 { 00:06:46.427 "name": "Malloc2", 00:06:46.427 "aliases": [ 00:06:46.427 "8a610d9f-1cd0-4ae1-90b2-923df795cbc4" 00:06:46.427 ], 00:06:46.427 "product_name": "Malloc disk", 00:06:46.427 "block_size": 512, 00:06:46.427 "num_blocks": 16384, 00:06:46.427 "uuid": "8a610d9f-1cd0-4ae1-90b2-923df795cbc4", 00:06:46.427 "assigned_rate_limits": { 00:06:46.427 "rw_ios_per_sec": 0, 00:06:46.427 "rw_mbytes_per_sec": 0, 00:06:46.427 "r_mbytes_per_sec": 0, 00:06:46.427 "w_mbytes_per_sec": 0 00:06:46.427 }, 00:06:46.427 "claimed": false, 00:06:46.427 "zoned": false, 00:06:46.427 "supported_io_types": { 00:06:46.427 "read": true, 00:06:46.427 "write": true, 00:06:46.427 "unmap": true, 00:06:46.427 "flush": true, 00:06:46.427 "reset": true, 00:06:46.427 "nvme_admin": false, 00:06:46.427 "nvme_io": false, 00:06:46.427 "nvme_io_md": false, 00:06:46.427 "write_zeroes": true, 00:06:46.427 "zcopy": true, 00:06:46.427 "get_zone_info": false, 00:06:46.427 "zone_management": false, 00:06:46.427 "zone_append": false, 00:06:46.427 "compare": false, 00:06:46.427 "compare_and_write": false, 00:06:46.427 "abort": true, 00:06:46.427 "seek_hole": false, 00:06:46.427 "seek_data": false, 00:06:46.427 "copy": true, 00:06:46.427 "nvme_iov_md": false 00:06:46.427 }, 00:06:46.427 "memory_domains": [ 00:06:46.427 { 00:06:46.427 "dma_device_id": "system", 00:06:46.427 "dma_device_type": 1 00:06:46.427 }, 00:06:46.427 { 00:06:46.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.427 "dma_device_type": 2 00:06:46.427 } 00:06:46.427 ], 00:06:46.427 "driver_specific": {} 00:06:46.427 } 00:06:46.427 ]' 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.686 [2024-11-20 12:20:52.241677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:46.686 [2024-11-20 12:20:52.241722] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:46.686 [2024-11-20 12:20:52.241759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2304ce0 00:06:46.686 [2024-11-20 12:20:52.241776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:46.686 [2024-11-20 12:20:52.243191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:46.686 [2024-11-20 12:20:52.243217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:46.686 Passthru0 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:46.686 { 00:06:46.686 "name": "Malloc2", 00:06:46.686 "aliases": [ 00:06:46.686 "8a610d9f-1cd0-4ae1-90b2-923df795cbc4" 00:06:46.686 ], 00:06:46.686 "product_name": "Malloc disk", 00:06:46.686 "block_size": 512, 00:06:46.686 "num_blocks": 16384, 00:06:46.686 "uuid": "8a610d9f-1cd0-4ae1-90b2-923df795cbc4", 00:06:46.686 "assigned_rate_limits": { 00:06:46.686 "rw_ios_per_sec": 0, 00:06:46.686 "rw_mbytes_per_sec": 0, 00:06:46.686 "r_mbytes_per_sec": 0, 00:06:46.686 "w_mbytes_per_sec": 0 00:06:46.686 }, 00:06:46.686 "claimed": true, 00:06:46.686 "claim_type": "exclusive_write", 00:06:46.686 "zoned": false, 00:06:46.686 "supported_io_types": { 00:06:46.686 "read": true, 00:06:46.686 "write": true, 00:06:46.686 "unmap": true, 00:06:46.686 "flush": true, 00:06:46.686 "reset": true, 00:06:46.686 "nvme_admin": false, 00:06:46.686 "nvme_io": false, 00:06:46.686 "nvme_io_md": false, 00:06:46.686 "write_zeroes": true, 00:06:46.686 "zcopy": true, 00:06:46.686 "get_zone_info": false, 00:06:46.686 "zone_management": false, 00:06:46.686 "zone_append": false, 00:06:46.686 "compare": false, 00:06:46.686 "compare_and_write": false, 00:06:46.686 "abort": true, 00:06:46.686 "seek_hole": false, 00:06:46.686 "seek_data": false, 00:06:46.686 "copy": true, 00:06:46.686 "nvme_iov_md": false 00:06:46.686 }, 00:06:46.686 "memory_domains": [ 00:06:46.686 { 00:06:46.686 "dma_device_id": "system", 00:06:46.686 "dma_device_type": 1 00:06:46.686 }, 00:06:46.686 { 00:06:46.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.686 "dma_device_type": 2 00:06:46.686 } 00:06:46.686 ], 00:06:46.686 "driver_specific": {} 00:06:46.686 }, 00:06:46.686 { 00:06:46.686 "name": "Passthru0", 00:06:46.686 "aliases": [ 00:06:46.686 "d7b95fed-831b-5d16-b487-0c44dd15f3d4" 00:06:46.686 ], 00:06:46.686 "product_name": "passthru", 00:06:46.686 "block_size": 512, 00:06:46.686 "num_blocks": 16384, 00:06:46.686 "uuid": "d7b95fed-831b-5d16-b487-0c44dd15f3d4", 00:06:46.686 "assigned_rate_limits": { 00:06:46.686 "rw_ios_per_sec": 0, 00:06:46.686 "rw_mbytes_per_sec": 0, 00:06:46.686 "r_mbytes_per_sec": 0, 00:06:46.686 "w_mbytes_per_sec": 0 00:06:46.686 }, 00:06:46.686 "claimed": false, 00:06:46.686 "zoned": false, 00:06:46.686 "supported_io_types": { 00:06:46.686 "read": true, 00:06:46.686 "write": true, 00:06:46.686 "unmap": true, 00:06:46.686 "flush": true, 00:06:46.686 "reset": true, 00:06:46.686 "nvme_admin": false, 
00:06:46.686 "nvme_io": false, 00:06:46.686 "nvme_io_md": false, 00:06:46.686 "write_zeroes": true, 00:06:46.686 "zcopy": true, 00:06:46.686 "get_zone_info": false, 00:06:46.686 "zone_management": false, 00:06:46.686 "zone_append": false, 00:06:46.686 "compare": false, 00:06:46.686 "compare_and_write": false, 00:06:46.686 "abort": true, 00:06:46.686 "seek_hole": false, 00:06:46.686 "seek_data": false, 00:06:46.686 "copy": true, 00:06:46.686 "nvme_iov_md": false 00:06:46.686 }, 00:06:46.686 "memory_domains": [ 00:06:46.686 { 00:06:46.686 "dma_device_id": "system", 00:06:46.686 "dma_device_type": 1 00:06:46.686 }, 00:06:46.686 { 00:06:46.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.686 "dma_device_type": 2 00:06:46.686 } 00:06:46.686 ], 00:06:46.686 "driver_specific": { 00:06:46.686 "passthru": { 00:06:46.686 "name": "Passthru0", 00:06:46.686 "base_bdev_name": "Malloc2" 00:06:46.686 } 00:06:46.686 } 00:06:46.686 } 00:06:46.686 ]' 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:46.686 12:20:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:46.687 00:06:46.687 real 0m0.274s 00:06:46.687 user 0m0.184s 00:06:46.687 sys 0m0.029s 00:06:46.687 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.687 12:20:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.687 ************************************ 00:06:46.687 END TEST rpc_daemon_integrity 00:06:46.687 ************************************ 00:06:46.687 12:20:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:46.687 12:20:52 rpc -- rpc/rpc.sh@84 -- # killprocess 2688692 00:06:46.687 12:20:52 rpc -- common/autotest_common.sh@954 -- # '[' -z 2688692 ']' 00:06:46.687 12:20:52 rpc -- common/autotest_common.sh@958 -- # kill -0 2688692 00:06:46.687 12:20:52 rpc -- common/autotest_common.sh@959 -- # uname 00:06:46.687 12:20:52 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.687 12:20:52 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2688692 00:06:46.687 12:20:52 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.687 12:20:52 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.687 12:20:52 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2688692' 00:06:46.687 killing process with pid 2688692 00:06:46.687 12:20:52 rpc -- common/autotest_common.sh@973 -- # kill 2688692 00:06:46.687 12:20:52 rpc -- common/autotest_common.sh@978 -- # wait 2688692 00:06:47.278 00:06:47.278 real 0m2.127s 00:06:47.278 user 0m2.758s 00:06:47.278 sys 0m0.624s 00:06:47.278 12:20:52 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.278 12:20:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.278 ************************************ 00:06:47.278 END TEST rpc 00:06:47.278 ************************************ 00:06:47.278 12:20:52 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:47.278 12:20:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.278 12:20:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.278 12:20:52 -- common/autotest_common.sh@10 -- # set +x 00:06:47.278 ************************************ 00:06:47.278 START TEST skip_rpc 00:06:47.278 ************************************ 00:06:47.278 12:20:52 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:47.278 * Looking for test storage... 00:06:47.278 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:47.278 12:20:52 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.278 12:20:52 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.278 12:20:52 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.278 12:20:52 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.278 12:20:52 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:47.278 12:20:52 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.278 12:20:52 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.278 --rc genhtml_branch_coverage=1 00:06:47.278 --rc genhtml_function_coverage=1 00:06:47.278 --rc genhtml_legend=1 00:06:47.278 --rc geninfo_all_blocks=1 00:06:47.278 --rc geninfo_unexecuted_blocks=1 00:06:47.278 00:06:47.278 ' 00:06:47.278 12:20:52 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.278 --rc genhtml_branch_coverage=1 00:06:47.278 --rc genhtml_function_coverage=1 00:06:47.278 --rc genhtml_legend=1 00:06:47.278 --rc geninfo_all_blocks=1 00:06:47.278 --rc geninfo_unexecuted_blocks=1 00:06:47.278 00:06:47.278 ' 00:06:47.278 12:20:52 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:47.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.278 --rc genhtml_branch_coverage=1 00:06:47.278 --rc genhtml_function_coverage=1 00:06:47.278 --rc genhtml_legend=1 00:06:47.278 --rc geninfo_all_blocks=1 00:06:47.278 --rc geninfo_unexecuted_blocks=1 00:06:47.278 00:06:47.278 ' 00:06:47.278 12:20:52 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.278 --rc genhtml_branch_coverage=1 00:06:47.278 --rc genhtml_function_coverage=1 00:06:47.278 --rc genhtml_legend=1 00:06:47.278 --rc geninfo_all_blocks=1 00:06:47.278 --rc geninfo_unexecuted_blocks=1 00:06:47.278 00:06:47.278 ' 00:06:47.278 12:20:52 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:47.278 12:20:52 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:06:47.278 12:20:52 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:47.278 12:20:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.278 12:20:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.278 12:20:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.278 ************************************ 00:06:47.278 START TEST skip_rpc 00:06:47.278 ************************************ 00:06:47.278 12:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:47.278 12:20:52 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2689047 00:06:47.278 12:20:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:47.278 12:20:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.278 12:20:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:47.278 [2024-11-20 12:20:53.036780] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:06:47.278 [2024-11-20 12:20:53.036870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689047 ] 00:06:47.537 [2024-11-20 12:20:53.107357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.537 [2024-11-20 12:20:53.170531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.802 12:20:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:52.802 12:20:57 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:52.802 12:20:57 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:52.802 12:20:57 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:52.802 12:20:57 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.802 12:20:57 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:52.802 12:20:57 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.802 12:20:57 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:52.803 12:20:57 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.803 12:20:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.803 12:20:57 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:52.803 12:20:57 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:52.803 12:20:57 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.803 12:20:57 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.803 12:20:57 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.803 12:20:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:52.803 12:20:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2689047 00:06:52.803 12:20:57 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2689047 ']' 00:06:52.803 12:20:57 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2689047 00:06:52.803 12:20:57 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:52.803 12:20:57 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.803 12:20:57 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2689047 00:06:52.803 12:20:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.803 12:20:58 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.803 12:20:58 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2689047' 00:06:52.803 killing process with pid 2689047 00:06:52.803 12:20:58 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2689047 00:06:52.803 12:20:58 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2689047 00:06:52.803 00:06:52.803 real 0m5.351s 00:06:52.803 user 0m5.071s 00:06:52.803 sys 0m0.291s 00:06:52.803 12:20:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.803 12:20:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.803 ************************************ 00:06:52.803 END TEST skip_rpc 00:06:52.803 ************************************ 00:06:52.803 12:20:58 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:52.803 12:20:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.803 12:20:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.803 12:20:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.803 ************************************ 00:06:52.803 START TEST skip_rpc_with_json 00:06:52.803 ************************************ 00:06:52.803 12:20:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:52.803 12:20:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:52.803 12:20:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2689545 00:06:52.803 12:20:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.803 12:20:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2689545 00:06:52.803 12:20:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:52.803 12:20:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2689545 ']' 00:06:52.803 12:20:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.803 12:20:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.803 12:20:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.803 12:20:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.803 12:20:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:52.803 [2024-11-20 12:20:58.490130] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
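Each target (re)start above is gated by the waitforlisten helper, which blocks until the freshly launched spdk_tgt answers on its UNIX-domain RPC socket. A minimal stand-in under that assumption (the loop below is illustrative, not the autotest implementation):

    wait_for_spdk_sock() {                      # hypothetical equivalent of waitforlisten
        local sock=${1:-/var/tmp/spdk.sock}
        for _ in $(seq 1 100); do               # ~10 s budget at 100 ms per probe
            ./scripts/rpc.py -s "$sock" spdk_get_version >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1                                # target never started listening
    }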
00:06:52.803 [2024-11-20 12:20:58.490298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689545 ] 00:06:53.061 [2024-11-20 12:20:58.594488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.061 [2024-11-20 12:20:58.657291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.321 12:20:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.321 12:20:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:53.321 12:20:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:53.321 12:20:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.321 12:20:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:53.321 [2024-11-20 12:20:58.910749] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:53.321 request: 00:06:53.321 { 00:06:53.321 "trtype": "tcp", 00:06:53.321 "method": "nvmf_get_transports", 00:06:53.321 "req_id": 1 00:06:53.321 } 00:06:53.321 Got JSON-RPC error response 00:06:53.321 response: 00:06:53.321 { 00:06:53.321 "code": -19, 00:06:53.321 "message": "No such device" 00:06:53.321 } 00:06:53.321 12:20:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:53.321 12:20:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:53.321 12:20:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.321 12:20:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:53.321 [2024-11-20 12:20:58.918887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.321 12:20:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.321 12:20:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:53.321 12:20:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.321 12:20:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:53.321 12:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.321 12:20:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:53.321 { 00:06:53.321 "subsystems": [ 00:06:53.321 { 00:06:53.321 "subsystem": "fsdev", 00:06:53.321 "config": [ 00:06:53.321 { 00:06:53.321 "method": "fsdev_set_opts", 00:06:53.321 "params": { 00:06:53.321 "fsdev_io_pool_size": 65535, 00:06:53.321 "fsdev_io_cache_size": 256 00:06:53.321 } 00:06:53.321 } 00:06:53.321 ] 00:06:53.321 }, 00:06:53.321 { 00:06:53.321 "subsystem": "keyring", 00:06:53.321 "config": [] 00:06:53.321 }, 00:06:53.321 { 00:06:53.321 "subsystem": "iobuf", 00:06:53.321 "config": [ 00:06:53.321 { 00:06:53.321 "method": "iobuf_set_options", 00:06:53.321 "params": { 00:06:53.321 "small_pool_count": 8192, 00:06:53.321 "large_pool_count": 1024, 00:06:53.321 "small_bufsize": 8192, 00:06:53.321 "large_bufsize": 135168, 00:06:53.321 "enable_numa": false 00:06:53.321 } 00:06:53.321 } 00:06:53.321 ] 00:06:53.321 }, 00:06:53.321 { 00:06:53.321 "subsystem": "sock", 00:06:53.321 "config": [ 00:06:53.321 { 
00:06:53.321 "method": "sock_set_default_impl", 00:06:53.321 "params": { 00:06:53.321 "impl_name": "posix" 00:06:53.321 } 00:06:53.321 }, 00:06:53.321 { 00:06:53.321 "method": "sock_impl_set_options", 00:06:53.321 "params": { 00:06:53.321 "impl_name": "ssl", 00:06:53.321 "recv_buf_size": 4096, 00:06:53.321 "send_buf_size": 4096, 00:06:53.321 "enable_recv_pipe": true, 00:06:53.321 "enable_quickack": false, 00:06:53.321 "enable_placement_id": 0, 00:06:53.321 "enable_zerocopy_send_server": true, 00:06:53.321 "enable_zerocopy_send_client": false, 00:06:53.321 "zerocopy_threshold": 0, 00:06:53.321 "tls_version": 0, 00:06:53.321 "enable_ktls": false 00:06:53.321 } 00:06:53.321 }, 00:06:53.321 { 00:06:53.321 "method": "sock_impl_set_options", 00:06:53.321 "params": { 00:06:53.321 "impl_name": "posix", 00:06:53.321 "recv_buf_size": 2097152, 00:06:53.321 "send_buf_size": 2097152, 00:06:53.321 "enable_recv_pipe": true, 00:06:53.321 "enable_quickack": false, 00:06:53.321 "enable_placement_id": 0, 00:06:53.321 "enable_zerocopy_send_server": true, 00:06:53.321 "enable_zerocopy_send_client": false, 00:06:53.321 "zerocopy_threshold": 0, 00:06:53.321 "tls_version": 0, 00:06:53.321 "enable_ktls": false 00:06:53.321 } 00:06:53.321 } 00:06:53.321 ] 00:06:53.321 }, 00:06:53.321 { 00:06:53.321 "subsystem": "vmd", 00:06:53.321 "config": [] 00:06:53.321 }, 00:06:53.321 { 00:06:53.321 "subsystem": "accel", 00:06:53.321 "config": [ 00:06:53.321 { 00:06:53.321 "method": "accel_set_options", 00:06:53.321 "params": { 00:06:53.321 "small_cache_size": 128, 00:06:53.321 "large_cache_size": 16, 00:06:53.321 "task_count": 2048, 00:06:53.321 "sequence_count": 2048, 00:06:53.321 "buf_count": 2048 00:06:53.321 } 00:06:53.321 } 00:06:53.321 ] 00:06:53.321 }, 00:06:53.321 { 00:06:53.321 "subsystem": "bdev", 00:06:53.321 "config": [ 00:06:53.321 { 00:06:53.321 "method": "bdev_set_options", 00:06:53.321 "params": { 00:06:53.321 "bdev_io_pool_size": 65535, 00:06:53.321 "bdev_io_cache_size": 256, 00:06:53.321 "bdev_auto_examine": true, 00:06:53.321 "iobuf_small_cache_size": 128, 00:06:53.321 "iobuf_large_cache_size": 16 00:06:53.321 } 00:06:53.321 }, 00:06:53.321 { 00:06:53.321 "method": "bdev_raid_set_options", 00:06:53.321 "params": { 00:06:53.321 "process_window_size_kb": 1024, 00:06:53.321 "process_max_bandwidth_mb_sec": 0 00:06:53.321 } 00:06:53.321 }, 00:06:53.321 { 00:06:53.322 "method": "bdev_iscsi_set_options", 00:06:53.322 "params": { 00:06:53.322 "timeout_sec": 30 00:06:53.322 } 00:06:53.322 }, 00:06:53.322 { 00:06:53.322 "method": "bdev_nvme_set_options", 00:06:53.322 "params": { 00:06:53.322 "action_on_timeout": "none", 00:06:53.322 "timeout_us": 0, 00:06:53.322 "timeout_admin_us": 0, 00:06:53.322 "keep_alive_timeout_ms": 10000, 00:06:53.322 "arbitration_burst": 0, 00:06:53.322 "low_priority_weight": 0, 00:06:53.322 "medium_priority_weight": 0, 00:06:53.322 "high_priority_weight": 0, 00:06:53.322 "nvme_adminq_poll_period_us": 10000, 00:06:53.322 "nvme_ioq_poll_period_us": 0, 00:06:53.322 "io_queue_requests": 0, 00:06:53.322 "delay_cmd_submit": true, 00:06:53.322 "transport_retry_count": 4, 00:06:53.322 "bdev_retry_count": 3, 00:06:53.322 "transport_ack_timeout": 0, 00:06:53.322 "ctrlr_loss_timeout_sec": 0, 00:06:53.322 "reconnect_delay_sec": 0, 00:06:53.322 "fast_io_fail_timeout_sec": 0, 00:06:53.322 "disable_auto_failback": false, 00:06:53.322 "generate_uuids": false, 00:06:53.322 "transport_tos": 0, 00:06:53.322 "nvme_error_stat": false, 00:06:53.322 "rdma_srq_size": 0, 00:06:53.322 "io_path_stat": false, 
00:06:53.322 "allow_accel_sequence": false, 00:06:53.322 "rdma_max_cq_size": 0, 00:06:53.322 "rdma_cm_event_timeout_ms": 0, 00:06:53.322 "dhchap_digests": [ 00:06:53.322 "sha256", 00:06:53.322 "sha384", 00:06:53.322 "sha512" 00:06:53.322 ], 00:06:53.322 "dhchap_dhgroups": [ 00:06:53.322 "null", 00:06:53.322 "ffdhe2048", 00:06:53.322 "ffdhe3072", 00:06:53.322 "ffdhe4096", 00:06:53.322 "ffdhe6144", 00:06:53.322 "ffdhe8192" 00:06:53.322 ] 00:06:53.322 } 00:06:53.322 }, 00:06:53.322 { 00:06:53.322 "method": "bdev_nvme_set_hotplug", 00:06:53.322 "params": { 00:06:53.322 "period_us": 100000, 00:06:53.322 "enable": false 00:06:53.322 } 00:06:53.322 }, 00:06:53.322 { 00:06:53.322 "method": "bdev_wait_for_examine" 00:06:53.322 } 00:06:53.322 ] 00:06:53.322 }, 00:06:53.322 { 00:06:53.322 "subsystem": "scsi", 00:06:53.322 "config": null 00:06:53.322 }, 00:06:53.322 { 00:06:53.322 "subsystem": "scheduler", 00:06:53.322 "config": [ 00:06:53.322 { 00:06:53.322 "method": "framework_set_scheduler", 00:06:53.322 "params": { 00:06:53.322 "name": "static" 00:06:53.322 } 00:06:53.322 } 00:06:53.322 ] 00:06:53.322 }, 00:06:53.322 { 00:06:53.322 "subsystem": "vhost_scsi", 00:06:53.322 "config": [] 00:06:53.322 }, 00:06:53.322 { 00:06:53.322 "subsystem": "vhost_blk", 00:06:53.322 "config": [] 00:06:53.322 }, 00:06:53.322 { 00:06:53.322 "subsystem": "ublk", 00:06:53.322 "config": [] 00:06:53.322 }, 00:06:53.322 { 00:06:53.322 "subsystem": "nbd", 00:06:53.322 "config": [] 00:06:53.322 }, 00:06:53.322 { 00:06:53.322 "subsystem": "nvmf", 00:06:53.322 "config": [ 00:06:53.322 { 00:06:53.322 "method": "nvmf_set_config", 00:06:53.322 "params": { 00:06:53.322 "discovery_filter": "match_any", 00:06:53.322 "admin_cmd_passthru": { 00:06:53.322 "identify_ctrlr": false 00:06:53.322 }, 00:06:53.322 "dhchap_digests": [ 00:06:53.322 "sha256", 00:06:53.322 "sha384", 00:06:53.322 "sha512" 00:06:53.322 ], 00:06:53.322 "dhchap_dhgroups": [ 00:06:53.322 "null", 00:06:53.322 "ffdhe2048", 00:06:53.322 "ffdhe3072", 00:06:53.322 "ffdhe4096", 00:06:53.322 "ffdhe6144", 00:06:53.322 "ffdhe8192" 00:06:53.322 ] 00:06:53.322 } 00:06:53.322 }, 00:06:53.322 { 00:06:53.322 "method": "nvmf_set_max_subsystems", 00:06:53.322 "params": { 00:06:53.322 "max_subsystems": 1024 00:06:53.322 } 00:06:53.322 }, 00:06:53.322 { 00:06:53.322 "method": "nvmf_set_crdt", 00:06:53.322 "params": { 00:06:53.322 "crdt1": 0, 00:06:53.322 "crdt2": 0, 00:06:53.322 "crdt3": 0 00:06:53.322 } 00:06:53.322 }, 00:06:53.322 { 00:06:53.322 "method": "nvmf_create_transport", 00:06:53.322 "params": { 00:06:53.322 "trtype": "TCP", 00:06:53.322 "max_queue_depth": 128, 00:06:53.322 "max_io_qpairs_per_ctrlr": 127, 00:06:53.322 "in_capsule_data_size": 4096, 00:06:53.322 "max_io_size": 131072, 00:06:53.322 "io_unit_size": 131072, 00:06:53.322 "max_aq_depth": 128, 00:06:53.322 "num_shared_buffers": 511, 00:06:53.322 "buf_cache_size": 4294967295, 00:06:53.322 "dif_insert_or_strip": false, 00:06:53.322 "zcopy": false, 00:06:53.322 "c2h_success": true, 00:06:53.322 "sock_priority": 0, 00:06:53.322 "abort_timeout_sec": 1, 00:06:53.322 "ack_timeout": 0, 00:06:53.322 "data_wr_pool_size": 0 00:06:53.322 } 00:06:53.322 } 00:06:53.322 ] 00:06:53.322 }, 00:06:53.322 { 00:06:53.322 "subsystem": "iscsi", 00:06:53.322 "config": [ 00:06:53.322 { 00:06:53.322 "method": "iscsi_set_options", 00:06:53.322 "params": { 00:06:53.322 "node_base": "iqn.2016-06.io.spdk", 00:06:53.322 "max_sessions": 128, 00:06:53.322 "max_connections_per_session": 2, 00:06:53.322 "max_queue_depth": 64, 00:06:53.322 
"default_time2wait": 2, 00:06:53.322 "default_time2retain": 20, 00:06:53.322 "first_burst_length": 8192, 00:06:53.322 "immediate_data": true, 00:06:53.322 "allow_duplicated_isid": false, 00:06:53.322 "error_recovery_level": 0, 00:06:53.322 "nop_timeout": 60, 00:06:53.322 "nop_in_interval": 30, 00:06:53.322 "disable_chap": false, 00:06:53.322 "require_chap": false, 00:06:53.322 "mutual_chap": false, 00:06:53.322 "chap_group": 0, 00:06:53.322 "max_large_datain_per_connection": 64, 00:06:53.322 "max_r2t_per_connection": 4, 00:06:53.322 "pdu_pool_size": 36864, 00:06:53.322 "immediate_data_pool_size": 16384, 00:06:53.322 "data_out_pool_size": 2048 00:06:53.322 } 00:06:53.322 } 00:06:53.322 ] 00:06:53.322 } 00:06:53.322 ] 00:06:53.322 } 00:06:53.322 12:20:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:53.322 12:20:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2689545 00:06:53.322 12:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2689545 ']' 00:06:53.322 12:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2689545 00:06:53.322 12:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:53.582 12:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.582 12:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2689545 00:06:53.582 12:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.582 12:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.582 12:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2689545' 00:06:53.582 killing process with pid 2689545 00:06:53.582 12:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2689545 00:06:53.582 12:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2689545 00:06:53.841 12:20:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:53.841 12:20:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2689650 00:06:53.841 12:20:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2689650 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2689650 ']' 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2689650 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2689650 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2689650' 00:06:59.112 killing process with pid 2689650 00:06:59.112 12:21:04 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2689650 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2689650 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:06:59.112 00:06:59.112 real 0m6.404s 00:06:59.112 user 0m6.080s 00:06:59.112 sys 0m0.686s 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:59.112 ************************************ 00:06:59.112 END TEST skip_rpc_with_json 00:06:59.112 ************************************ 00:06:59.112 12:21:04 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:59.112 12:21:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.112 12:21:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.112 12:21:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.112 ************************************ 00:06:59.112 START TEST skip_rpc_with_delay 00:06:59.112 ************************************ 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:59.112 12:21:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:59.372 [2024-11-20 12:21:04.937052] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
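The assertion here is inverted: skip_rpc_with_delay passes only because spdk_tgt refuses to combine --no-rpc-server with --wait-for-rpc (there would be no RPC server to wait on). The NOT helper traced above lives in common/autotest_common.sh; a minimal sketch of the pattern, not the harness's verbatim code, looks like this:

  # Sketch of an exit-status-inverting helper in the style of the NOT()
  # call traced above (simplified; the real helper also normalizes certain
  # exit codes, which is what the es=... juggling below is doing).
  not() {
    local es=0
    "$@" || es=$?     # run the wrapped command, remember how it exited
    (( es != 0 ))     # report success only if the command failed
  }

  # The test passes exactly when this launch is rejected:
  not /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc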
00:06:59.372 12:21:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:59.372 12:21:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:59.372 12:21:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:59.372 12:21:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:59.372 00:06:59.372 real 0m0.162s 00:06:59.372 user 0m0.110s 00:06:59.372 sys 0m0.050s 00:06:59.372 12:21:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.372 12:21:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:59.372 ************************************ 00:06:59.372 END TEST skip_rpc_with_delay 00:06:59.372 ************************************ 00:06:59.372 12:21:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:59.372 12:21:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:59.372 12:21:05 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:59.372 12:21:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.372 12:21:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.372 12:21:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.372 ************************************ 00:06:59.372 START TEST exit_on_failed_rpc_init 00:06:59.372 ************************************ 00:06:59.372 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:59.372 12:21:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2690292 00:06:59.372 12:21:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.372 12:21:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2690292 00:06:59.372 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2690292 ']' 00:06:59.372 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.372 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.372 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.372 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.372 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:59.372 [2024-11-20 12:21:05.083172] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:06:59.372 [2024-11-20 12:21:05.083273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690292 ] 00:06:59.631 [2024-11-20 12:21:05.154411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.631 [2024-11-20 12:21:05.219025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.890 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.890 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:59.890 12:21:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:59.890 12:21:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:59.890 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:59.890 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:59.890 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:59.890 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.890 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:59.890 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.890 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:59.891 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.891 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:59.891 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:59.891 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:59.891 [2024-11-20 12:21:05.536903] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:06:59.891 [2024-11-20 12:21:05.537008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690358 ] 00:06:59.891 [2024-11-20 12:21:05.608888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.149 [2024-11-20 12:21:05.672467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.149 [2024-11-20 12:21:05.672580] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:00.149 [2024-11-20 12:21:05.672602] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:00.149 [2024-11-20 12:21:05.672616] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.149 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:00.149 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:00.149 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:00.149 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:00.149 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:00.149 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:00.150 12:21:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:00.150 12:21:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2690292 00:07:00.150 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2690292 ']' 00:07:00.150 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2690292 00:07:00.150 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:00.150 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.150 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2690292 00:07:00.150 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.150 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.150 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2690292' 00:07:00.150 killing process with pid 2690292 00:07:00.150 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2690292 00:07:00.150 12:21:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2690292 00:07:00.409 00:07:00.409 real 0m1.069s 00:07:00.409 user 0m1.280s 00:07:00.409 sys 0m0.412s 00:07:00.409 12:21:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.409 12:21:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:00.409 ************************************ 00:07:00.409 END TEST exit_on_failed_rpc_init 00:07:00.409 ************************************ 00:07:00.409 12:21:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:07:00.409 00:07:00.409 real 0m13.334s 00:07:00.409 user 0m12.721s 00:07:00.409 sys 0m1.642s 00:07:00.409 12:21:06 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.409 12:21:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.409 ************************************ 00:07:00.409 END TEST skip_rpc 00:07:00.409 ************************************ 00:07:00.409 12:21:06 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:00.409 12:21:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.409 12:21:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.409 12:21:06 -- 
common/autotest_common.sh@10 -- # set +x 00:07:00.409 ************************************ 00:07:00.409 START TEST rpc_client 00:07:00.409 ************************************ 00:07:00.409 12:21:06 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:00.669 * Looking for test storage... 00:07:00.669 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:07:00.669 12:21:06 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.669 12:21:06 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.669 12:21:06 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.669 12:21:06 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.669 12:21:06 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:00.669 12:21:06 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.669 12:21:06 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.669 --rc genhtml_branch_coverage=1 00:07:00.669 --rc genhtml_function_coverage=1 00:07:00.669 --rc genhtml_legend=1 00:07:00.669 --rc geninfo_all_blocks=1 00:07:00.669 --rc geninfo_unexecuted_blocks=1 00:07:00.669 00:07:00.669 ' 00:07:00.669 12:21:06 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.669 --rc genhtml_branch_coverage=1 00:07:00.669 --rc genhtml_function_coverage=1 00:07:00.669 --rc genhtml_legend=1 00:07:00.669 --rc geninfo_all_blocks=1 00:07:00.669 --rc geninfo_unexecuted_blocks=1 00:07:00.669 00:07:00.669 ' 00:07:00.669 12:21:06 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.669 --rc genhtml_branch_coverage=1 00:07:00.669 --rc genhtml_function_coverage=1 00:07:00.669 --rc genhtml_legend=1 00:07:00.669 --rc geninfo_all_blocks=1 00:07:00.669 --rc geninfo_unexecuted_blocks=1 00:07:00.669 00:07:00.669 ' 00:07:00.669 12:21:06 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.669 --rc genhtml_branch_coverage=1 00:07:00.669 --rc genhtml_function_coverage=1 00:07:00.669 --rc genhtml_legend=1 00:07:00.669 --rc geninfo_all_blocks=1 00:07:00.669 --rc geninfo_unexecuted_blocks=1 00:07:00.669 00:07:00.669 ' 00:07:00.669 12:21:06 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:00.669 OK 00:07:00.669 12:21:06 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:00.669 00:07:00.669 real 0m0.171s 00:07:00.669 user 0m0.116s 00:07:00.669 sys 0m0.063s 00:07:00.669 12:21:06 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.669 12:21:06 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:00.669 ************************************ 00:07:00.669 END TEST rpc_client 00:07:00.669 ************************************ 00:07:00.669 12:21:06 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:07:00.669 
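Every suite in this log is bracketed the same way: run_test prints the START/END banner pair and the real/user/sys timing shown above for rpc_client, then does the same for json_config next. A plausible sketch of such a wrapper (hedged; not the verbatim helper from common/autotest_common.sh):

  # run_test-style wrapper: banner, timed execution, banner (sketch only).
  run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"          # produces the real/user/sys lines seen in this log
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
  }

The actual helper also folds each suite's runtime into the parent totals, which is where the skip_rpc summary timing printed earlier comes from.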
12:21:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.669 12:21:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.669 12:21:06 -- common/autotest_common.sh@10 -- # set +x 00:07:00.669 ************************************ 00:07:00.669 START TEST json_config 00:07:00.669 ************************************ 00:07:00.669 12:21:06 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:07:00.669 12:21:06 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.669 12:21:06 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.669 12:21:06 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.929 12:21:06 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.929 12:21:06 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.929 12:21:06 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.929 12:21:06 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.929 12:21:06 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.929 12:21:06 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.929 12:21:06 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.929 12:21:06 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.929 12:21:06 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.929 12:21:06 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.929 12:21:06 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.929 12:21:06 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.929 12:21:06 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:00.929 12:21:06 json_config -- scripts/common.sh@345 -- # : 1 00:07:00.929 12:21:06 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.929 12:21:06 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.929 12:21:06 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:00.929 12:21:06 json_config -- scripts/common.sh@353 -- # local d=1 00:07:00.929 12:21:06 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.929 12:21:06 json_config -- scripts/common.sh@355 -- # echo 1 00:07:00.929 12:21:06 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.929 12:21:06 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:00.929 12:21:06 json_config -- scripts/common.sh@353 -- # local d=2 00:07:00.929 12:21:06 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.929 12:21:06 json_config -- scripts/common.sh@355 -- # echo 2 00:07:00.929 12:21:06 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.929 12:21:06 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.929 12:21:06 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.929 12:21:06 json_config -- scripts/common.sh@368 -- # return 0 00:07:00.929 12:21:06 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.929 12:21:06 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.929 --rc genhtml_branch_coverage=1 00:07:00.929 --rc genhtml_function_coverage=1 00:07:00.929 --rc genhtml_legend=1 00:07:00.929 --rc geninfo_all_blocks=1 00:07:00.929 --rc geninfo_unexecuted_blocks=1 00:07:00.929 00:07:00.929 ' 00:07:00.929 12:21:06 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.929 --rc genhtml_branch_coverage=1 00:07:00.929 --rc genhtml_function_coverage=1 00:07:00.929 --rc genhtml_legend=1 00:07:00.929 --rc geninfo_all_blocks=1 00:07:00.929 --rc geninfo_unexecuted_blocks=1 00:07:00.929 00:07:00.929 ' 00:07:00.929 12:21:06 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.929 --rc genhtml_branch_coverage=1 00:07:00.929 --rc genhtml_function_coverage=1 00:07:00.929 --rc genhtml_legend=1 00:07:00.929 --rc geninfo_all_blocks=1 00:07:00.929 --rc geninfo_unexecuted_blocks=1 00:07:00.929 00:07:00.929 ' 00:07:00.929 12:21:06 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.929 --rc genhtml_branch_coverage=1 00:07:00.929 --rc genhtml_function_coverage=1 00:07:00.929 --rc genhtml_legend=1 00:07:00.929 --rc geninfo_all_blocks=1 00:07:00.929 --rc geninfo_unexecuted_blocks=1 00:07:00.929 00:07:00.929 ' 00:07:00.929 12:21:06 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:00.929 12:21:06 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:00.929 12:21:06 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.929 12:21:06 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.929 12:21:06 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.929 12:21:06 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.929 12:21:06 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.929 12:21:06 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.929 12:21:06 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:07:00.929 12:21:06 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.929 12:21:06 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.929 12:21:06 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.929 12:21:06 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:07:00.929 12:21:06 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:07:00.929 12:21:06 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.929 12:21:06 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.929 12:21:06 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:00.929 12:21:06 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.929 12:21:06 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:00.929 12:21:06 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.929 12:21:06 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.929 12:21:06 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.929 12:21:06 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.929 12:21:06 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.929 12:21:06 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.930 12:21:06 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.930 12:21:06 json_config -- paths/export.sh@5 -- # export PATH 00:07:00.930 12:21:06 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.930 12:21:06 json_config -- nvmf/common.sh@51 -- # : 0 00:07:00.930 12:21:06 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:00.930 12:21:06 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:00.930 
12:21:06 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.930 12:21:06 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.930 12:21:06 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.930 12:21:06 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:00.930 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:00.930 12:21:06 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:00.930 12:21:06 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:00.930 12:21:06 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:00.930 12:21:06 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:07:00.930 12:21:06 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:00.930 12:21:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:00.930 12:21:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:00.930 12:21:06 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:00.930 12:21:06 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:00.930 12:21:06 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:00.930 12:21:06 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:00.930 12:21:06 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:00.930 12:21:06 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:00.930 12:21:06 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:00.930 12:21:06 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:07:00.930 12:21:06 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:00.930 12:21:06 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:00.930 12:21:06 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:00.930 12:21:06 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:07:00.930 INFO: JSON configuration test init 00:07:00.930 12:21:06 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:07:00.930 12:21:06 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:07:00.930 12:21:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:00.930 12:21:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.930 12:21:06 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:07:00.930 12:21:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:00.930 12:21:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.930 12:21:06 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:07:00.930 12:21:06 json_config -- json_config/common.sh@9 -- # 
local app=target 00:07:00.930 12:21:06 json_config -- json_config/common.sh@10 -- # shift 00:07:00.930 12:21:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:00.930 12:21:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:00.930 12:21:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:00.930 12:21:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:00.930 12:21:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:00.930 12:21:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2690765 00:07:00.930 12:21:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:00.930 Waiting for target to run... 00:07:00.930 12:21:06 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:00.930 12:21:06 json_config -- json_config/common.sh@25 -- # waitforlisten 2690765 /var/tmp/spdk_tgt.sock 00:07:00.930 12:21:06 json_config -- common/autotest_common.sh@835 -- # '[' -z 2690765 ']' 00:07:00.930 12:21:06 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:00.930 12:21:06 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.930 12:21:06 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:00.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:00.930 12:21:06 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.930 12:21:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.930 [2024-11-20 12:21:06.580992] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:07:00.930 [2024-11-20 12:21:06.581103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690765 ] 00:07:01.497 [2024-11-20 12:21:06.965568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.497 [2024-11-20 12:21:07.018197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.062 12:21:07 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.062 12:21:07 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:02.062 12:21:07 json_config -- json_config/common.sh@26 -- # echo '' 00:07:02.062 00:07:02.062 12:21:07 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:07:02.062 12:21:07 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:07:02.062 12:21:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:02.062 12:21:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.062 12:21:07 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:07:02.062 12:21:07 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:07:02.062 12:21:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:02.062 12:21:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.062 12:21:07 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:02.062 12:21:07 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:07:02.062 12:21:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:05.431 12:21:10 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:07:05.431 12:21:10 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:05.431 12:21:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:05.431 12:21:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:05.431 12:21:10 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:05.431 12:21:10 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:05.431 12:21:10 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:05.431 12:21:10 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:07:05.431 12:21:10 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:07:05.431 12:21:10 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:05.431 12:21:10 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:05.431 12:21:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@51 -- # local get_types 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@54 -- 
# tr ' ' '\n' 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@54 -- # sort 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:07:05.689 12:21:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:05.689 12:21:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@62 -- # return 0 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:07:05.689 12:21:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:05.689 12:21:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:07:05.689 12:21:11 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:07:05.689 12:21:11 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:05.689 12:21:11 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.689 12:21:11 json_config -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:05.689 12:21:11 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:05.689 12:21:11 json_config -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:05.689 12:21:11 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.689 12:21:11 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:07:05.689 12:21:11 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.689 12:21:11 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]] 00:07:05.689 12:21:11 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:05.689 12:21:11 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:07:05.689 12:21:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:07.594 
12:21:13 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@320 -- # e810=() 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@321 -- # x722=() 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@322 -- # mlx=() 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:07:07.594 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:07:07.594 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:07:07.594 12:21:13 json_config -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:07:07.594 Found net devices under 0000:83:00.0: mlx_0_0 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:07:07.594 Found net devices under 0000:83:00.1: mlx_0_1 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@442 -- # is_hw=yes 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:07.594 12:21:13 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@448 -- # rdma_device_init 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@62 -- # uname 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@109 -- # continue 2 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@109 -- # continue 2 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@78 -- # ip= 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@79 -- # [[ -z '' ]] 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@80 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@81 -- # ip link set mlx_0_0 up 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@82 -- # (( count = count + 1 )) 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:07.595 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:07.595 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:07:07.595 altname enp131s0f0np0 00:07:07.595 inet 192.168.100.8/24 scope global mlx_0_0 00:07:07.595 valid_lft forever preferred_lft forever 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@78 -- # ip= 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@79 -- # [[ -z '' ]] 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@80 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 
00:07:07.595 12:21:13 json_config -- nvmf/common.sh@81 -- # ip link set mlx_0_1 up 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@82 -- # (( count = count + 1 )) 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:07.595 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:07.595 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:07:07.595 altname enp131s0f1np1 00:07:07.595 inet 192.168.100.9/24 scope global mlx_0_1 00:07:07.595 valid_lft forever preferred_lft forever 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@450 -- # return 0 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@109 -- # continue 2 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@109 -- # continue 2 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:07.595 12:21:13 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 
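The allocate_nic_ips flow traced above walks the RDMA interfaces found earlier and gives each one a consecutive address under 192.168.100.0/24, starting at NVMF_IP_LEAST_ADDR=8; the ip/awk/cut pipelines then read the assigned addresses back. Condensed into a standalone sketch (variable names follow nvmf/common.sh, but this is not the verbatim function; assumes root and iproute2):

  # Condensed sketch of the allocation loop traced above; interface names
  # as discovered by get_rdma_if_list on this host.
  count=8                                   # NVMF_IP_LEAST_ADDR
  for nic in mlx_0_0 mlx_0_1; do
    ip=$(ip -o -4 addr show "$nic" | awk '{print $4}' | cut -d/ -f1)
    if [ -z "$ip" ]; then                   # only assign when unconfigured
      ip addr add "192.168.100.$count/24" dev "$nic"
      ip link set "$nic" up
    fi
    count=$((count + 1))
  done

This is why mlx_0_0 ends up with 192.168.100.8 and mlx_0_1 with 192.168.100.9 in the ip addr show output above.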
00:07:07.595 12:21:13 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:07:07.595 192.168.100.9'
00:07:07.595 12:21:13 json_config -- nvmf/common.sh@485 -- # head -n 1
00:07:07.595 12:21:13 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:07:07.595 192.168.100.9'
00:07:07.595 12:21:13 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:07:07.595 12:21:13 json_config -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:07:07.595 192.168.100.9'
00:07:07.595 12:21:13 json_config -- nvmf/common.sh@486 -- # tail -n +2
00:07:07.595 12:21:13 json_config -- nvmf/common.sh@486 -- # head -n 1
00:07:07.595 12:21:13 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:07:07.595 12:21:13 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:07:07.595 12:21:13 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:07:07.595 12:21:13 json_config -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:07:07.595 12:21:13 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:07:07.595 12:21:13 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:07:07.595 12:21:13 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]]
00:07:07.595 12:21:13 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:07:07.595 12:21:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:07:08.163 MallocForNvmf0
00:07:08.163 12:21:13 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:07:08.163 12:21:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:07:08.422 MallocForNvmf1
00:07:08.422 12:21:13 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0
00:07:08.422 12:21:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0
00:07:08.681 [2024-11-20 12:21:14.298283] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16
00:07:08.681 [2024-11-20 12:21:14.332231] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1000a10/0xed5520) succeed.
00:07:08.681 [2024-11-20 12:21:14.349283] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1000b90/0xf55200) succeed.
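
The RDMA_IP_LIST handling near the top of this block peels the first and second target IPs off a newline-separated list, with the same head/tail pattern the trace shows:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
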
00:07:08.681 12:21:14 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:07:08.681 12:21:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:07:09.247 12:21:14 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:07:09.247 12:21:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:07:09.505 12:21:15 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:07:09.505 12:21:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:07:09.764 12:21:15 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:07:09.764 12:21:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:07:10.022 [2024-11-20 12:21:15.705806] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:07:10.022 12:21:15 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config
00:07:10.022 12:21:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:10.022 12:21:15 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:10.022 12:21:15 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:07:10.022 12:21:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:10.022 12:21:15 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:10.022 12:21:15 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:07:10.022 12:21:15 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:07:10.022 12:21:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:07:10.589 MallocBdevForConfigChangeCheck
00:07:10.589 12:21:16 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:07:10.589 12:21:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:10.589 12:21:16 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:10.589 12:21:16 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:07:10.589 12:21:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:10.847 12:21:16 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
00:07:10.847 INFO: shutting down applications...
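
Condensed, the target configuration json_config.sh has now driven through rpc.py is the following sequence; every command, size, and name below is taken from the trace above:

    rpc='/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0      # 8 MiB bdev, 512-byte blocks
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MiB bdev, 1024-byte blocks
    $rpc nvmf_create_transport -t rdma -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
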
00:07:10.847 12:21:16 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:07:10.847 12:21:16 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:07:10.847 12:21:16 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:07:10.847 12:21:16 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:07:12.748 Calling clear_iscsi_subsystem
00:07:12.748 Calling clear_nvmf_subsystem
00:07:12.748 Calling clear_nbd_subsystem
00:07:12.748 Calling clear_ublk_subsystem
00:07:12.748 Calling clear_vhost_blk_subsystem
00:07:12.748 Calling clear_vhost_scsi_subsystem
00:07:12.748 Calling clear_bdev_subsystem
00:07:12.748 12:21:18 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py
00:07:12.748 12:21:18 json_config -- json_config/json_config.sh@350 -- # count=100
00:07:12.748 12:21:18 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:07:12.748 12:21:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:07:12.748 12:21:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:12.748 12:21:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:07:13.006 12:21:18 json_config -- json_config/json_config.sh@352 -- # break
00:07:13.006 12:21:18 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:07:13.006 12:21:18 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:07:13.006 12:21:18 json_config -- json_config/common.sh@31 -- # local app=target
00:07:13.006 12:21:18 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:07:13.006 12:21:18 json_config -- json_config/common.sh@35 -- # [[ -n 2690765 ]]
00:07:13.006 12:21:18 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2690765
00:07:13.006 12:21:18 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:07:13.006 12:21:18 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:13.006 12:21:18 json_config -- json_config/common.sh@41 -- # kill -0 2690765
00:07:13.006 12:21:18 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:07:13.580 12:21:19 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:07:13.580 12:21:19 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:13.580 12:21:19 json_config -- json_config/common.sh@41 -- # kill -0 2690765
00:07:13.580 12:21:19 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:07:13.580 12:21:19 json_config -- json_config/common.sh@43 -- # break
00:07:13.580 12:21:19 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:07:13.580 12:21:19 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:07:13.580 SPDK target shutdown done
00:07:13.580 12:21:19 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:07:13.580 INFO: relaunching applications...
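
The shutdown above is json_config/common.sh's bounded poll: SIGINT the target, then probe it with kill -0 every half second, giving up after 30 ticks. A sketch of the pattern, assuming $pid holds the target PID:

    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2> /dev/null || break    # process gone, stop waiting
        sleep 0.5
    done
    echo 'SPDK target shutdown done'
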
00:07:13.580 12:21:19 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json
00:07:13.580 12:21:19 json_config -- json_config/common.sh@9 -- # local app=target
00:07:13.580 12:21:19 json_config -- json_config/common.sh@10 -- # shift
00:07:13.580 12:21:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:07:13.580 12:21:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:07:13.580 12:21:19 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:07:13.580 12:21:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:13.580 12:21:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:13.580 12:21:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2693168
00:07:13.580 12:21:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json
00:07:13.580 12:21:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:07:13.580 Waiting for target to run...
00:07:13.580 12:21:19 json_config -- json_config/common.sh@25 -- # waitforlisten 2693168 /var/tmp/spdk_tgt.sock
00:07:13.580 12:21:19 json_config -- common/autotest_common.sh@835 -- # '[' -z 2693168 ']'
00:07:13.580 12:21:19 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:13.580 12:21:19 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:13.580 12:21:19 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:13.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:13.580 12:21:19 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:13.580 12:21:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:13.917 [2024-11-20 12:21:19.321337] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization...
[2024-11-20 12:21:19.321444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2693168 ]
00:07:14.150 [2024-11-20 12:21:19.688110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:14.150 [2024-11-20 12:21:19.741224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:17.456 [2024-11-20 12:21:22.810695] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20742a0/0x207f000) succeed.
00:07:17.456 [2024-11-20 12:21:22.826200] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20774f0/0x20ff040) succeed.
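
Relaunching is simply starting spdk_tgt again against the JSON written by save_config, with the flags shown in the trace:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt \
        -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json &
    app_pid=$!    # 2693168 in this run
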
00:07:17.456 [2024-11-20 12:21:22.884703] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:07:17.456 12:21:22 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:17.456 12:21:22 json_config -- common/autotest_common.sh@868 -- # return 0
00:07:17.456 12:21:22 json_config -- json_config/common.sh@26 -- # echo ''
00:07:17.456
00:07:17.456 12:21:22 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:07:17.456 12:21:22 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:07:17.456 INFO: Checking if target configuration is the same...
00:07:17.456 12:21:22 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json
00:07:17.456 12:21:22 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:07:17.456 12:21:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:17.456 + '[' 2 -ne 2 ']'
00:07:17.456 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh
00:07:17.456 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../..
00:07:17.456 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:07:17.456 +++ basename /dev/fd/62
00:07:17.456 ++ mktemp /tmp/62.XXX
00:07:17.456 + tmp_file_1=/tmp/62.2l1
00:07:17.456 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json
00:07:17.456 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:07:17.456 + tmp_file_2=/tmp/spdk_tgt_config.json.u1F
00:07:17.456 + ret=0
00:07:17.456 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:17.715 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:17.715 + diff -u /tmp/62.2l1 /tmp/spdk_tgt_config.json.u1F
00:07:17.715 + echo 'INFO: JSON config files are the same'
00:07:17.715 INFO: JSON config files are the same
00:07:17.715 + rm /tmp/62.2l1 /tmp/spdk_tgt_config.json.u1F
00:07:17.715 + exit 0
00:07:17.715 12:21:23 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:07:17.715 12:21:23 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:07:17.715 INFO: changing configuration and checking if this can be detected...
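
The same-configuration check normalizes both JSON documents with config_filter.py -method sort before diffing, so key order cannot produce a spurious mismatch; diff's exit status is the verdict. Roughly (the temp file names here are illustrative, json_diff.sh uses mktemp as traced above):

    filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
    $filter -method sort < /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'
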
00:07:17.715 12:21:23 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:07:17.715 12:21:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:07:18.281 12:21:23 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json
00:07:18.281 12:21:23 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:07:18.281 12:21:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:18.281 + '[' 2 -ne 2 ']'
00:07:18.281 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh
00:07:18.281 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../..
00:07:18.281 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:07:18.281 +++ basename /dev/fd/62
00:07:18.281 ++ mktemp /tmp/62.XXX
00:07:18.281 + tmp_file_1=/tmp/62.OuT
00:07:18.281 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json
00:07:18.281 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:07:18.281 + tmp_file_2=/tmp/spdk_tgt_config.json.zLO
00:07:18.281 + ret=0
00:07:18.281 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:18.539 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:18.539 + diff -u /tmp/62.OuT /tmp/spdk_tgt_config.json.zLO
00:07:18.539 + ret=1
00:07:18.539 + echo '=== Start of file: /tmp/62.OuT ==='
00:07:18.539 + cat /tmp/62.OuT
00:07:18.797 + echo '=== End of file: /tmp/62.OuT ==='
00:07:18.797 + echo ''
00:07:18.797 + echo '=== Start of file: /tmp/spdk_tgt_config.json.zLO ==='
00:07:18.797 + cat /tmp/spdk_tgt_config.json.zLO
00:07:18.797 + echo '=== End of file: /tmp/spdk_tgt_config.json.zLO ==='
00:07:18.797 + echo ''
00:07:18.797 + rm /tmp/62.OuT /tmp/spdk_tgt_config.json.zLO
00:07:18.797 + exit 1
00:07:18.797 12:21:24 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:07:18.797 INFO: configuration change detected.
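
Change detection is the same comparison run after deleting MallocBdevForConfigChangeCheck; the nonzero diff status (the ret=1 above) is the signal. Schematically, continuing the sketch from the previous step:

    $rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
    if ! diff -u /tmp/live.json /tmp/saved.json > /dev/null; then
        echo 'INFO: configuration change detected.'
    fi
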
00:07:18.797 12:21:24 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:07:18.797 12:21:24 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:07:18.797 12:21:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:18.797 12:21:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:18.797 12:21:24 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:07:18.797 12:21:24 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:07:18.797 12:21:24 json_config -- json_config/json_config.sh@324 -- # [[ -n 2693168 ]]
00:07:18.797 12:21:24 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:07:18.797 12:21:24 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:07:18.797 12:21:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:18.797 12:21:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:18.798 12:21:24 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:07:18.798 12:21:24 json_config -- json_config/json_config.sh@200 -- # uname -s
00:07:18.798 12:21:24 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:07:18.798 12:21:24 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:07:18.798 12:21:24 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:07:18.798 12:21:24 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:07:18.798 12:21:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:18.798 12:21:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:18.798 12:21:24 json_config -- json_config/json_config.sh@330 -- # killprocess 2693168
00:07:18.798 12:21:24 json_config -- common/autotest_common.sh@954 -- # '[' -z 2693168 ']'
00:07:18.798 12:21:24 json_config -- common/autotest_common.sh@958 -- # kill -0 2693168
00:07:18.798 12:21:24 json_config -- common/autotest_common.sh@959 -- # uname
00:07:18.798 12:21:24 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:18.798 12:21:24 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2693168
00:07:18.798 12:21:24 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:18.798 12:21:24 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:18.798 12:21:24 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2693168'
00:07:18.798 killing process with pid 2693168
00:07:18.798 12:21:24 json_config -- common/autotest_common.sh@973 -- # kill 2693168
00:07:18.798 12:21:24 json_config -- common/autotest_common.sh@978 -- # wait 2693168
00:07:20.697 12:21:26 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json
00:07:20.697 12:21:26 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:07:20.697 12:21:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:20.697 12:21:26 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:20.698 12:21:26 json_config -- json_config/json_config.sh@335 -- # return 0
00:07:20.698 12:21:26 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:07:20.698 INFO: Success
00:07:20.698 12:21:26 json_config -- json_config/json_config.sh@1 -- # nvmftestfini
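
killprocess, invoked here on PID 2693168, refuses to signal blindly: it checks that the PID is alive and that the command name is not sudo before killing and reaping. A simplified sketch of the helper as the trace exercises it:

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                      # must still be running
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name == sudo ]] && return 1     # never kill a bare sudo
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
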
00:07:20.698 12:21:26 json_config -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:20.698 12:21:26 json_config -- nvmf/common.sh@121 -- # sync
00:07:20.698 12:21:26 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']'
00:07:20.698 12:21:26 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']'
00:07:20.698 12:21:26 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:07:20.698 12:21:26 json_config -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:20.698 12:21:26 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]]
00:07:20.698
00:07:20.698 real 0m19.696s
00:07:20.698 user 0m23.211s
00:07:20.698 sys 0m4.136s
00:07:20.698 12:21:26 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:20.698 12:21:26 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:20.698 ************************************
00:07:20.698 END TEST json_config
00:07:20.698 ************************************
00:07:20.698 12:21:26 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:07:20.698 12:21:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:20.698 12:21:26 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:20.698 12:21:26 -- common/autotest_common.sh@10 -- # set +x
00:07:20.698 ************************************
00:07:20.698 START TEST json_config_extra_key
00:07:20.698 ************************************
00:07:20.698 12:21:26 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:07:20.698 12:21:26 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:20.698 12:21:26 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version
00:07:20.698 12:21:26 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:20.698 12:21:26 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:07:20.698 12:21:26 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:20.698 12:21:26 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:20.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:20.698 --rc genhtml_branch_coverage=1
00:07:20.698 --rc genhtml_function_coverage=1
00:07:20.698 --rc genhtml_legend=1
00:07:20.698 --rc geninfo_all_blocks=1
00:07:20.698 --rc geninfo_unexecuted_blocks=1
00:07:20.698
00:07:20.698 '
00:07:20.698 12:21:26 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:20.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:20.698 --rc genhtml_branch_coverage=1
00:07:20.698 --rc genhtml_function_coverage=1
00:07:20.698 --rc genhtml_legend=1
00:07:20.698 --rc geninfo_all_blocks=1
00:07:20.698 --rc geninfo_unexecuted_blocks=1
00:07:20.698
00:07:20.698 '
00:07:20.698 12:21:26 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:07:20.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:20.698 --rc genhtml_branch_coverage=1
00:07:20.698 --rc genhtml_function_coverage=1
00:07:20.698 --rc genhtml_legend=1
00:07:20.698 --rc geninfo_all_blocks=1
00:07:20.698 --rc geninfo_unexecuted_blocks=1
00:07:20.698
00:07:20.698 '
00:07:20.698 12:21:26 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:07:20.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:20.698 --rc genhtml_branch_coverage=1
00:07:20.698 --rc genhtml_function_coverage=1
00:07:20.698 --rc genhtml_legend=1
00:07:20.698 --rc geninfo_all_blocks=1
00:07:20.698 --rc geninfo_unexecuted_blocks=1
00:07:20.698
00:07:20.698 '
00:07:20.698 12:21:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:20.698 12:21:26 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:20.698 12:21:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:20.698 12:21:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:20.698 12:21:26 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:20.698 12:21:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:07:20.698 12:21:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:20.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:20.698 12:21:26 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:20.698 12:21:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh
00:07:20.699 12:21:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:07:20.699 12:21:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:07:20.699 12:21:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:07:20.699 12:21:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:07:20.699 12:21:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:07:20.699 12:21:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:07:20.699 12:21:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json')
00:07:20.699 12:21:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:07:20.699 12:21:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:07:20.699 12:21:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:07:20.699 INFO: launching applications...
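
The declare -A lines above are json_config/common.sh's bookkeeping: one associative array per attribute, each keyed by app name ('target' here), so later steps can look up the PID, RPC socket, launch parameters, and config path by name. Schematically:

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json')

    # Typical lookups later on:
    #   kill -SIGINT "${app_pid[$app]}"
    #   rpc.py -s "${app_socket[$app]}" ...
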
00:07:20.699 12:21:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json
00:07:20.699 12:21:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:07:20.699 12:21:26 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:07:20.699 12:21:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:07:20.699 12:21:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:07:20.699 12:21:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:07:20.699 12:21:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:20.699 12:21:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:20.699 12:21:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2693873
00:07:20.699 12:21:26 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json
00:07:20.699 12:21:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:07:20.699 Waiting for target to run...
00:07:20.699 12:21:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2693873 /var/tmp/spdk_tgt.sock
00:07:20.699 12:21:26 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2693873 ']'
00:07:20.699 12:21:26 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:20.699 12:21:26 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:20.699 12:21:26 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:20.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:20.699 12:21:26 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:20.699 12:21:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:07:20.958 [2024-11-20 12:21:26.319432] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization...
[2024-11-20 12:21:26.319555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2693873 ]
00:07:20.958 [2024-11-20 12:21:26.681833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:21.216 [2024-11-20 12:21:26.735138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
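
The waitforlisten calls in these launches block until the new target answers on its RPC socket, or fail once max_retries polls are exhausted. A minimal sketch of the idea, using rpc_get_methods as the probe and assuming $rootdir points at the SPDK tree (the real helper in autotest_common.sh is more elaborate):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1    # app died during startup
            if "$rootdir/scripts/rpc.py" -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                  # socket is up and answering
            fi
            sleep 0.5
        done
        return 1
    }
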
00:07:21.782 12:21:27 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:21.782 12:21:27 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:07:21.782 12:21:27 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:07:21.782
00:07:21.782 12:21:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:07:21.782 INFO: shutting down applications...
00:07:21.782 12:21:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:07:21.782 12:21:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:07:21.782 12:21:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:07:21.782 12:21:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2693873 ]]
00:07:21.782 12:21:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2693873
00:07:21.782 12:21:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:07:21.782 12:21:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:21.782 12:21:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2693873
00:07:21.782 12:21:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:07:22.351 12:21:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:07:22.351 12:21:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:22.351 12:21:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2693873
00:07:22.351 12:21:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:07:22.351 12:21:27 json_config_extra_key -- json_config/common.sh@43 -- # break
00:07:22.351 12:21:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:07:22.351 12:21:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:07:22.351 SPDK target shutdown done
00:07:22.351 12:21:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:07:22.351 Success
00:07:22.351
00:07:22.351 real 0m1.840s
00:07:22.351 user 0m1.803s
00:07:22.351 sys 0m0.483s
00:07:22.351 12:21:27 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:22.351 12:21:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:07:22.351 ************************************
00:07:22.351 END TEST json_config_extra_key
00:07:22.351 ************************************
00:07:22.351 12:21:27 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:07:22.351 12:21:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:22.351 12:21:27 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:22.351 12:21:27 -- common/autotest_common.sh@10 -- # set +x
00:07:22.351 ************************************
00:07:22.351 START TEST alias_rpc
00:07:22.351 ************************************
00:07:22.351 12:21:27 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:07:22.351 * Looking for test storage...
00:07:22.351 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc
00:07:22.351 12:21:28 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:22.351 12:21:28 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:07:22.351 12:21:28 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:22.610 12:21:28 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@345 -- # : 1
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:22.610 12:21:28 alias_rpc -- scripts/common.sh@368 -- # return 0
00:07:22.610 12:21:28 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:22.610 12:21:28 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:22.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:22.610 --rc genhtml_branch_coverage=1
00:07:22.610 --rc genhtml_function_coverage=1
00:07:22.610 --rc genhtml_legend=1
00:07:22.610 --rc geninfo_all_blocks=1
00:07:22.610 --rc geninfo_unexecuted_blocks=1
00:07:22.610
00:07:22.610 '
00:07:22.610 12:21:28 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:22.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:22.611 --rc genhtml_branch_coverage=1
00:07:22.611 --rc genhtml_function_coverage=1
00:07:22.611 --rc genhtml_legend=1
00:07:22.611 --rc geninfo_all_blocks=1
00:07:22.611 --rc geninfo_unexecuted_blocks=1
00:07:22.611
00:07:22.611 '
00:07:22.611 12:21:28 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:07:22.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:22.611 --rc genhtml_branch_coverage=1
00:07:22.611 --rc genhtml_function_coverage=1
00:07:22.611 --rc genhtml_legend=1
00:07:22.611 --rc geninfo_all_blocks=1
00:07:22.611 --rc geninfo_unexecuted_blocks=1
00:07:22.611
00:07:22.611 '
00:07:22.611 12:21:28 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:07:22.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:22.611 --rc genhtml_branch_coverage=1
00:07:22.611 --rc genhtml_function_coverage=1
00:07:22.611 --rc genhtml_legend=1
00:07:22.611 --rc geninfo_all_blocks=1
00:07:22.611 --rc geninfo_unexecuted_blocks=1
00:07:22.611
00:07:22.611 '
00:07:22.611 12:21:28 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:22.611 12:21:28 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2694113
00:07:22.611 12:21:28 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
00:07:22.611 12:21:28 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2694113
00:07:22.611 12:21:28 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2694113 ']'
00:07:22.611 12:21:28 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:22.611 12:21:28 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:22.611 12:21:28 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:22.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:22.611 12:21:28 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:22.611 12:21:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:22.611 [2024-11-20 12:21:28.206966] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization...
00:07:22.611 [2024-11-20 12:21:28.207077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694113 ]
00:07:22.611 [2024-11-20 12:21:28.279720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:22.869 [2024-11-20 12:21:28.343019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:22.869 12:21:28 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:22.869 12:21:28 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:22.869 12:21:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i
00:07:23.436 12:21:28 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2694113
00:07:23.436 12:21:28 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2694113 ']'
00:07:23.436 12:21:28 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2694113
00:07:23.436 12:21:28 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:07:23.436 12:21:28 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:23.436 12:21:28 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2694113
00:07:23.436 12:21:28 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:23.436 12:21:28 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:23.436 12:21:28 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2694113'
00:07:23.436 killing process with pid 2694113
00:07:23.436 12:21:28 alias_rpc -- common/autotest_common.sh@973 -- # kill 2694113
00:07:23.436 12:21:28 alias_rpc -- common/autotest_common.sh@978 -- # wait 2694113
00:07:23.695
00:07:23.695 real 0m1.337s
00:07:23.695 user 0m1.567s
00:07:23.695 sys 0m0.439s
00:07:23.695 12:21:29 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:23.695 12:21:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:23.695 ************************************
00:07:23.695 END TEST alias_rpc
00:07:23.695 ************************************
00:07:23.695 12:21:29 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:07:23.695 12:21:29 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh
00:07:23.695 12:21:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:23.695 12:21:29 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:23.695 12:21:29 -- common/autotest_common.sh@10 -- # set +x
00:07:23.695 ************************************
00:07:23.695 START TEST spdkcli_tcp
00:07:23.695 ************************************
00:07:23.695 12:21:29 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh
00:07:23.695 * Looking for test storage...
00:07:23.695 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli
00:07:23.695 12:21:29 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:23.695 12:21:29 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version
00:07:23.695 12:21:29 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:23.953 12:21:29 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:23.953 12:21:29 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:07:23.953 12:21:29 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:23.953 12:21:29 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:23.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:23.953 --rc genhtml_branch_coverage=1
00:07:23.953 --rc genhtml_function_coverage=1
00:07:23.953 --rc genhtml_legend=1
00:07:23.953 --rc geninfo_all_blocks=1
00:07:23.953 --rc geninfo_unexecuted_blocks=1
00:07:23.953
00:07:23.953 '
00:07:23.953 12:21:29 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:07:23.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:23.953 --rc genhtml_branch_coverage=1
00:07:23.953 --rc genhtml_function_coverage=1
00:07:23.953 --rc genhtml_legend=1
00:07:23.953 --rc geninfo_all_blocks=1
00:07:23.953 --rc geninfo_unexecuted_blocks=1
00:07:23.953
00:07:23.953 '
00:07:23.953 12:21:29 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:07:23.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:23.953 --rc genhtml_branch_coverage=1
00:07:23.953 --rc genhtml_function_coverage=1
00:07:23.953 --rc genhtml_legend=1
00:07:23.953 --rc geninfo_all_blocks=1
00:07:23.953 --rc geninfo_unexecuted_blocks=1
00:07:23.953
00:07:23.953 '
00:07:23.953 12:21:29 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh
00:07:23.953 12:21:29 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:07:23.953 12:21:29 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py
00:07:23.953 12:21:29 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:07:23.953 12:21:29 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:07:23.953 12:21:29 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:07:23.953 12:21:29 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:07:23.953 12:21:29 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:23.953 12:21:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:23.953 12:21:29 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2694274
00:07:23.953 12:21:29 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:07:23.953 12:21:29 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2694274
00:07:23.953 12:21:29 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2694274 ']'
00:07:23.953 12:21:29 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:23.953 12:21:29 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:23.953 12:21:29 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:23.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:23.953 12:21:29 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:23.953 12:21:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:23.954 [2024-11-20 12:21:29.587496] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization...
00:07:23.954 [2024-11-20 12:21:29.587606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694274 ]
00:07:23.954 [2024-11-20 12:21:29.661206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:24.211 [2024-11-20 12:21:29.729557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:24.211 [2024-11-20 12:21:29.729592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:24.469 12:21:29 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:24.469 12:21:29 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
00:07:24.469 12:21:29 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2694327
00:07:24.469 12:21:29 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:07:24.469 12:21:29 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:07:24.728 [
00:07:24.728 "bdev_malloc_delete",
00:07:24.728 "bdev_malloc_create",
00:07:24.728 "bdev_null_resize",
00:07:24.728 "bdev_null_delete",
00:07:24.728 "bdev_null_create",
00:07:24.728 "bdev_nvme_cuse_unregister",
00:07:24.728 "bdev_nvme_cuse_register",
00:07:24.728 "bdev_opal_new_user",
00:07:24.728 "bdev_opal_set_lock_state",
00:07:24.728 "bdev_opal_delete",
00:07:24.728 "bdev_opal_get_info",
00:07:24.728 "bdev_opal_create",
00:07:24.728 "bdev_nvme_opal_revert",
00:07:24.728 "bdev_nvme_opal_init",
00:07:24.728 "bdev_nvme_send_cmd",
00:07:24.728 "bdev_nvme_set_keys",
00:07:24.728 "bdev_nvme_get_path_iostat",
00:07:24.728 "bdev_nvme_get_mdns_discovery_info",
00:07:24.728 "bdev_nvme_stop_mdns_discovery",
00:07:24.728 "bdev_nvme_start_mdns_discovery",
00:07:24.728 "bdev_nvme_set_multipath_policy",
00:07:24.728 "bdev_nvme_set_preferred_path",
00:07:24.728 "bdev_nvme_get_io_paths",
00:07:24.728 "bdev_nvme_remove_error_injection",
00:07:24.728 "bdev_nvme_add_error_injection",
00:07:24.728 "bdev_nvme_get_discovery_info",
00:07:24.728 "bdev_nvme_stop_discovery",
00:07:24.728 "bdev_nvme_start_discovery",
00:07:24.728 "bdev_nvme_get_controller_health_info",
00:07:24.728 "bdev_nvme_disable_controller",
00:07:24.728 "bdev_nvme_enable_controller",
00:07:24.728 "bdev_nvme_reset_controller",
00:07:24.728 "bdev_nvme_get_transport_statistics",
00:07:24.728 "bdev_nvme_apply_firmware",
00:07:24.728 "bdev_nvme_detach_controller",
00:07:24.728 "bdev_nvme_get_controllers",
00:07:24.728 "bdev_nvme_attach_controller",
00:07:24.728 "bdev_nvme_set_hotplug",
00:07:24.728 "bdev_nvme_set_options",
00:07:24.728 "bdev_passthru_delete",
00:07:24.728 "bdev_passthru_create",
00:07:24.728 "bdev_lvol_set_parent_bdev",
00:07:24.728 "bdev_lvol_set_parent",
00:07:24.728 "bdev_lvol_check_shallow_copy",
00:07:24.728 "bdev_lvol_start_shallow_copy",
00:07:24.728 "bdev_lvol_grow_lvstore",
00:07:24.728 "bdev_lvol_get_lvols",
00:07:24.728 "bdev_lvol_get_lvstores",
00:07:24.728 "bdev_lvol_delete",
00:07:24.728 "bdev_lvol_set_read_only",
00:07:24.728 "bdev_lvol_resize",
00:07:24.728 "bdev_lvol_decouple_parent",
00:07:24.728 "bdev_lvol_inflate",
00:07:24.728 "bdev_lvol_rename",
00:07:24.728 "bdev_lvol_clone_bdev",
00:07:24.728 "bdev_lvol_clone",
00:07:24.728 "bdev_lvol_snapshot",
00:07:24.728 "bdev_lvol_create",
00:07:24.728 "bdev_lvol_delete_lvstore",
00:07:24.728 "bdev_lvol_rename_lvstore",
00:07:24.728 "bdev_lvol_create_lvstore", 00:07:24.728 "bdev_raid_set_options", 00:07:24.728 "bdev_raid_remove_base_bdev", 00:07:24.728 "bdev_raid_add_base_bdev", 00:07:24.728 "bdev_raid_delete", 00:07:24.728 "bdev_raid_create", 00:07:24.728 "bdev_raid_get_bdevs", 00:07:24.728 "bdev_error_inject_error", 00:07:24.728 "bdev_error_delete", 00:07:24.728 "bdev_error_create", 00:07:24.728 "bdev_split_delete", 00:07:24.728 "bdev_split_create", 00:07:24.728 "bdev_delay_delete", 00:07:24.728 "bdev_delay_create", 00:07:24.728 "bdev_delay_update_latency", 00:07:24.728 "bdev_zone_block_delete", 00:07:24.728 "bdev_zone_block_create", 00:07:24.728 "blobfs_create", 00:07:24.728 "blobfs_detect", 00:07:24.728 "blobfs_set_cache_size", 00:07:24.728 "bdev_aio_delete", 00:07:24.728 "bdev_aio_rescan", 00:07:24.728 "bdev_aio_create", 00:07:24.728 "bdev_ftl_set_property", 00:07:24.728 "bdev_ftl_get_properties", 00:07:24.728 "bdev_ftl_get_stats", 00:07:24.728 "bdev_ftl_unmap", 00:07:24.728 "bdev_ftl_unload", 00:07:24.728 "bdev_ftl_delete", 00:07:24.728 "bdev_ftl_load", 00:07:24.728 "bdev_ftl_create", 00:07:24.728 "bdev_virtio_attach_controller", 00:07:24.728 "bdev_virtio_scsi_get_devices", 00:07:24.728 "bdev_virtio_detach_controller", 00:07:24.728 "bdev_virtio_blk_set_hotplug", 00:07:24.728 "bdev_iscsi_delete", 00:07:24.728 "bdev_iscsi_create", 00:07:24.728 "bdev_iscsi_set_options", 00:07:24.728 "accel_error_inject_error", 00:07:24.728 "ioat_scan_accel_module", 00:07:24.728 "dsa_scan_accel_module", 00:07:24.728 "iaa_scan_accel_module", 00:07:24.728 "keyring_file_remove_key", 00:07:24.728 "keyring_file_add_key", 00:07:24.728 "keyring_linux_set_options", 00:07:24.728 "fsdev_aio_delete", 00:07:24.728 "fsdev_aio_create", 00:07:24.728 "iscsi_get_histogram", 00:07:24.728 "iscsi_enable_histogram", 00:07:24.728 "iscsi_set_options", 00:07:24.728 "iscsi_get_auth_groups", 00:07:24.728 "iscsi_auth_group_remove_secret", 00:07:24.728 "iscsi_auth_group_add_secret", 00:07:24.729 "iscsi_delete_auth_group", 00:07:24.729 "iscsi_create_auth_group", 00:07:24.729 "iscsi_set_discovery_auth", 00:07:24.729 "iscsi_get_options", 00:07:24.729 "iscsi_target_node_request_logout", 00:07:24.729 "iscsi_target_node_set_redirect", 00:07:24.729 "iscsi_target_node_set_auth", 00:07:24.729 "iscsi_target_node_add_lun", 00:07:24.729 "iscsi_get_stats", 00:07:24.729 "iscsi_get_connections", 00:07:24.729 "iscsi_portal_group_set_auth", 00:07:24.729 "iscsi_start_portal_group", 00:07:24.729 "iscsi_delete_portal_group", 00:07:24.729 "iscsi_create_portal_group", 00:07:24.729 "iscsi_get_portal_groups", 00:07:24.729 "iscsi_delete_target_node", 00:07:24.729 "iscsi_target_node_remove_pg_ig_maps", 00:07:24.729 "iscsi_target_node_add_pg_ig_maps", 00:07:24.729 "iscsi_create_target_node", 00:07:24.729 "iscsi_get_target_nodes", 00:07:24.729 "iscsi_delete_initiator_group", 00:07:24.729 "iscsi_initiator_group_remove_initiators", 00:07:24.729 "iscsi_initiator_group_add_initiators", 00:07:24.729 "iscsi_create_initiator_group", 00:07:24.729 "iscsi_get_initiator_groups", 00:07:24.729 "nvmf_set_crdt", 00:07:24.729 "nvmf_set_config", 00:07:24.729 "nvmf_set_max_subsystems", 00:07:24.729 "nvmf_stop_mdns_prr", 00:07:24.729 "nvmf_publish_mdns_prr", 00:07:24.729 "nvmf_subsystem_get_listeners", 00:07:24.729 "nvmf_subsystem_get_qpairs", 00:07:24.729 "nvmf_subsystem_get_controllers", 00:07:24.729 "nvmf_get_stats", 00:07:24.729 "nvmf_get_transports", 00:07:24.729 "nvmf_create_transport", 00:07:24.729 "nvmf_get_targets", 00:07:24.729 "nvmf_delete_target", 00:07:24.729 "nvmf_create_target", 
00:07:24.729 "nvmf_subsystem_allow_any_host", 00:07:24.729 "nvmf_subsystem_set_keys", 00:07:24.729 "nvmf_subsystem_remove_host", 00:07:24.729 "nvmf_subsystem_add_host", 00:07:24.729 "nvmf_ns_remove_host", 00:07:24.729 "nvmf_ns_add_host", 00:07:24.729 "nvmf_subsystem_remove_ns", 00:07:24.729 "nvmf_subsystem_set_ns_ana_group", 00:07:24.729 "nvmf_subsystem_add_ns", 00:07:24.729 "nvmf_subsystem_listener_set_ana_state", 00:07:24.729 "nvmf_discovery_get_referrals", 00:07:24.729 "nvmf_discovery_remove_referral", 00:07:24.729 "nvmf_discovery_add_referral", 00:07:24.729 "nvmf_subsystem_remove_listener", 00:07:24.729 "nvmf_subsystem_add_listener", 00:07:24.729 "nvmf_delete_subsystem", 00:07:24.729 "nvmf_create_subsystem", 00:07:24.729 "nvmf_get_subsystems", 00:07:24.729 "env_dpdk_get_mem_stats", 00:07:24.729 "nbd_get_disks", 00:07:24.729 "nbd_stop_disk", 00:07:24.729 "nbd_start_disk", 00:07:24.729 "ublk_recover_disk", 00:07:24.729 "ublk_get_disks", 00:07:24.729 "ublk_stop_disk", 00:07:24.729 "ublk_start_disk", 00:07:24.729 "ublk_destroy_target", 00:07:24.729 "ublk_create_target", 00:07:24.729 "virtio_blk_create_transport", 00:07:24.729 "virtio_blk_get_transports", 00:07:24.729 "vhost_controller_set_coalescing", 00:07:24.729 "vhost_get_controllers", 00:07:24.729 "vhost_delete_controller", 00:07:24.729 "vhost_create_blk_controller", 00:07:24.729 "vhost_scsi_controller_remove_target", 00:07:24.729 "vhost_scsi_controller_add_target", 00:07:24.729 "vhost_start_scsi_controller", 00:07:24.729 "vhost_create_scsi_controller", 00:07:24.729 "thread_set_cpumask", 00:07:24.729 "scheduler_set_options", 00:07:24.729 "framework_get_governor", 00:07:24.729 "framework_get_scheduler", 00:07:24.729 "framework_set_scheduler", 00:07:24.729 "framework_get_reactors", 00:07:24.729 "thread_get_io_channels", 00:07:24.729 "thread_get_pollers", 00:07:24.729 "thread_get_stats", 00:07:24.729 "framework_monitor_context_switch", 00:07:24.729 "spdk_kill_instance", 00:07:24.729 "log_enable_timestamps", 00:07:24.729 "log_get_flags", 00:07:24.729 "log_clear_flag", 00:07:24.729 "log_set_flag", 00:07:24.729 "log_get_level", 00:07:24.729 "log_set_level", 00:07:24.729 "log_get_print_level", 00:07:24.729 "log_set_print_level", 00:07:24.729 "framework_enable_cpumask_locks", 00:07:24.729 "framework_disable_cpumask_locks", 00:07:24.729 "framework_wait_init", 00:07:24.729 "framework_start_init", 00:07:24.729 "scsi_get_devices", 00:07:24.729 "bdev_get_histogram", 00:07:24.729 "bdev_enable_histogram", 00:07:24.729 "bdev_set_qos_limit", 00:07:24.729 "bdev_set_qd_sampling_period", 00:07:24.729 "bdev_get_bdevs", 00:07:24.729 "bdev_reset_iostat", 00:07:24.729 "bdev_get_iostat", 00:07:24.729 "bdev_examine", 00:07:24.729 "bdev_wait_for_examine", 00:07:24.729 "bdev_set_options", 00:07:24.729 "accel_get_stats", 00:07:24.729 "accel_set_options", 00:07:24.729 "accel_set_driver", 00:07:24.729 "accel_crypto_key_destroy", 00:07:24.729 "accel_crypto_keys_get", 00:07:24.729 "accel_crypto_key_create", 00:07:24.729 "accel_assign_opc", 00:07:24.729 "accel_get_module_info", 00:07:24.729 "accel_get_opc_assignments", 00:07:24.729 "vmd_rescan", 00:07:24.729 "vmd_remove_device", 00:07:24.729 "vmd_enable", 00:07:24.729 "sock_get_default_impl", 00:07:24.729 "sock_set_default_impl", 00:07:24.729 "sock_impl_set_options", 00:07:24.729 "sock_impl_get_options", 00:07:24.729 "iobuf_get_stats", 00:07:24.729 "iobuf_set_options", 00:07:24.729 "keyring_get_keys", 00:07:24.729 "framework_get_pci_devices", 00:07:24.729 "framework_get_config", 00:07:24.729 "framework_get_subsystems", 
00:07:24.729 "fsdev_set_opts", 00:07:24.729 "fsdev_get_opts", 00:07:24.729 "trace_get_info", 00:07:24.729 "trace_get_tpoint_group_mask", 00:07:24.729 "trace_disable_tpoint_group", 00:07:24.729 "trace_enable_tpoint_group", 00:07:24.729 "trace_clear_tpoint_mask", 00:07:24.729 "trace_set_tpoint_mask", 00:07:24.729 "notify_get_notifications", 00:07:24.729 "notify_get_types", 00:07:24.729 "spdk_get_version", 00:07:24.729 "rpc_get_methods" 00:07:24.729 ] 00:07:24.729 12:21:30 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:24.729 12:21:30 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:24.729 12:21:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.729 12:21:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:24.729 12:21:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2694274 00:07:24.729 12:21:30 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2694274 ']' 00:07:24.729 12:21:30 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2694274 00:07:24.729 12:21:30 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:24.729 12:21:30 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.729 12:21:30 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2694274 00:07:24.729 12:21:30 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.729 12:21:30 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.729 12:21:30 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2694274' 00:07:24.729 killing process with pid 2694274 00:07:24.729 12:21:30 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2694274 00:07:24.729 12:21:30 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2694274 00:07:24.989 00:07:24.989 real 0m1.366s 00:07:24.989 user 0m2.457s 00:07:24.989 sys 0m0.484s 00:07:24.989 12:21:30 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.989 12:21:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.989 ************************************ 00:07:24.989 END TEST spdkcli_tcp 00:07:24.989 ************************************ 00:07:24.989 12:21:30 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:24.989 12:21:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.989 12:21:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.989 12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:07:24.989 ************************************ 00:07:24.989 START TEST dpdk_mem_utility 00:07:24.989 ************************************ 00:07:24.989 12:21:30 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:25.248 * Looking for test storage... 
00:07:25.248 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:07:25.248 12:21:30 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:25.248 12:21:30 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:07:25.248 12:21:30 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:25.248 12:21:30 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:25.248 12:21:30 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.248 12:21:30 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.248 12:21:30 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.248 12:21:30 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.248 12:21:30 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.248 12:21:30 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.248 12:21:30 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.248 12:21:30 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.248 12:21:30 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.248 12:21:30 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.248 12:21:30 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.248 12:21:30 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:25.248 12:21:30 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:25.248 12:21:30 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.248 12:21:30 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:25.248 12:21:30 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:25.248 12:21:30 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:25.249 12:21:30 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.249 12:21:30 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:25.249 12:21:30 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.249 12:21:30 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:25.249 12:21:30 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:25.249 12:21:30 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.249 12:21:30 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:25.249 12:21:30 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.249 12:21:30 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.249 12:21:30 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.249 12:21:30 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:25.249 12:21:30 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.249 12:21:30 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:25.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.249 --rc genhtml_branch_coverage=1 00:07:25.249 --rc genhtml_function_coverage=1 00:07:25.249 --rc genhtml_legend=1 00:07:25.249 --rc geninfo_all_blocks=1 00:07:25.249 --rc geninfo_unexecuted_blocks=1 00:07:25.249 00:07:25.249 ' 00:07:25.249 12:21:30 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:25.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.249 --rc 
genhtml_branch_coverage=1 00:07:25.249 --rc genhtml_function_coverage=1 00:07:25.249 --rc genhtml_legend=1 00:07:25.249 --rc geninfo_all_blocks=1 00:07:25.249 --rc geninfo_unexecuted_blocks=1 00:07:25.249 00:07:25.249 ' 00:07:25.249 12:21:30 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:25.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.249 --rc genhtml_branch_coverage=1 00:07:25.249 --rc genhtml_function_coverage=1 00:07:25.249 --rc genhtml_legend=1 00:07:25.249 --rc geninfo_all_blocks=1 00:07:25.249 --rc geninfo_unexecuted_blocks=1 00:07:25.249 00:07:25.249 ' 00:07:25.249 12:21:30 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:25.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.249 --rc genhtml_branch_coverage=1 00:07:25.249 --rc genhtml_function_coverage=1 00:07:25.249 --rc genhtml_legend=1 00:07:25.249 --rc geninfo_all_blocks=1 00:07:25.249 --rc geninfo_unexecuted_blocks=1 00:07:25.249 00:07:25.249 ' 00:07:25.249 12:21:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:25.249 12:21:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2694443 00:07:25.249 12:21:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:25.249 12:21:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2694443 00:07:25.249 12:21:30 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2694443 ']' 00:07:25.249 12:21:30 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.249 12:21:30 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.249 12:21:30 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.249 12:21:30 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.249 12:21:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:25.249 [2024-11-20 12:21:30.978583] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
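test_dpdk_mem_info.sh exercises two pieces that appear in the trace below: the env_dpdk_get_mem_stats RPC, which asks the running target to write a DPDK memory dump to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which parses that dump into the heap, mempool, and memzone summaries that follow. A minimal sketch of the same flow (relative paths assume the SPDK repo root):

# dump DPDK memory state from the running target
./scripts/rpc.py env_dpdk_get_mem_stats        # returns {"filename": "/tmp/spdk_mem_dump.txt"}
# summarize heaps, mempools, and memzones from the dump
./scripts/dpdk_mem_info.py
# print the detailed element map for heap 0, as the test does next
./scripts/dpdk_mem_info.py -m 0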
00:07:25.249 [2024-11-20 12:21:30.978693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694443 ] 00:07:25.507 [2024-11-20 12:21:31.052377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.507 [2024-11-20 12:21:31.117808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.766 12:21:31 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.766 12:21:31 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:25.767 12:21:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:25.767 12:21:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:25.767 12:21:31 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.767 12:21:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:25.767 { 00:07:25.767 "filename": "/tmp/spdk_mem_dump.txt" 00:07:25.767 } 00:07:25.767 12:21:31 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.767 12:21:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:25.767 DPDK memory size 810.000000 MiB in 1 heap(s) 00:07:25.767 1 heaps totaling size 810.000000 MiB 00:07:25.767 size: 810.000000 MiB heap id: 0 00:07:25.767 end heaps---------- 00:07:25.767 9 mempools totaling size 595.772034 MiB 00:07:25.767 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:25.767 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:25.767 size: 92.545471 MiB name: bdev_io_2694443 00:07:25.767 size: 50.003479 MiB name: msgpool_2694443 00:07:25.767 size: 36.509338 MiB name: fsdev_io_2694443 00:07:25.767 size: 21.763794 MiB name: PDU_Pool 00:07:25.767 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:25.767 size: 4.133484 MiB name: evtpool_2694443 00:07:25.767 size: 0.026123 MiB name: Session_Pool 00:07:25.767 end mempools------- 00:07:25.767 6 memzones totaling size 4.142822 MiB 00:07:25.767 size: 1.000366 MiB name: RG_ring_0_2694443 00:07:25.767 size: 1.000366 MiB name: RG_ring_1_2694443 00:07:25.767 size: 1.000366 MiB name: RG_ring_4_2694443 00:07:25.767 size: 1.000366 MiB name: RG_ring_5_2694443 00:07:25.767 size: 0.125366 MiB name: RG_ring_2_2694443 00:07:25.767 size: 0.015991 MiB name: RG_ring_3_2694443 00:07:25.767 end memzones------- 00:07:25.767 12:21:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:25.767 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:07:25.767 list of free elements. 
size: 10.862488 MiB 00:07:25.767 element at address: 0x200018a00000 with size: 0.999878 MiB 00:07:25.767 element at address: 0x200018c00000 with size: 0.999878 MiB 00:07:25.767 element at address: 0x200000400000 with size: 0.998535 MiB 00:07:25.767 element at address: 0x200031800000 with size: 0.994446 MiB 00:07:25.767 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:25.767 element at address: 0x200012c00000 with size: 0.954285 MiB 00:07:25.767 element at address: 0x200018e00000 with size: 0.936584 MiB 00:07:25.767 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:25.767 element at address: 0x20001a600000 with size: 0.582886 MiB 00:07:25.767 element at address: 0x200000c00000 with size: 0.495422 MiB 00:07:25.767 element at address: 0x20000a600000 with size: 0.490723 MiB 00:07:25.767 element at address: 0x200019000000 with size: 0.485657 MiB 00:07:25.767 element at address: 0x200003e00000 with size: 0.481934 MiB 00:07:25.767 element at address: 0x200027a00000 with size: 0.410034 MiB 00:07:25.767 element at address: 0x200000800000 with size: 0.355042 MiB 00:07:25.767 list of standard malloc elements. size: 199.218628 MiB 00:07:25.767 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:25.767 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:25.767 element at address: 0x200018afff80 with size: 1.000122 MiB 00:07:25.767 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:07:25.767 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:25.767 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:25.767 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:07:25.767 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:25.767 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:07:25.767 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:25.767 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:25.767 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:25.767 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:25.767 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:07:25.767 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:25.767 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:25.767 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:07:25.767 element at address: 0x20000085b040 with size: 0.000183 MiB 00:07:25.767 element at address: 0x20000085f300 with size: 0.000183 MiB 00:07:25.767 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:25.767 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:25.767 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:25.767 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:25.767 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:25.767 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:25.767 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:25.767 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:25.767 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:25.767 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:25.767 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:07:25.767 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:25.767 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:25.767 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:07:25.767 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:07:25.767 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:07:25.767 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:07:25.767 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:07:25.767 element at address: 0x20001a695380 with size: 0.000183 MiB 00:07:25.767 element at address: 0x20001a695440 with size: 0.000183 MiB 00:07:25.767 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:07:25.767 element at address: 0x200027a69040 with size: 0.000183 MiB 00:07:25.767 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:07:25.767 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:07:25.767 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:07:25.767 list of memzone associated elements. size: 599.918884 MiB 00:07:25.767 element at address: 0x20001a695500 with size: 211.416748 MiB 00:07:25.767 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:25.767 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:07:25.767 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:25.767 element at address: 0x200012df4780 with size: 92.045044 MiB 00:07:25.767 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2694443_0 00:07:25.767 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:25.767 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2694443_0 00:07:25.767 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:25.767 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2694443_0 00:07:25.767 element at address: 0x2000191be940 with size: 20.255554 MiB 00:07:25.767 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:25.767 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:07:25.767 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:25.767 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:25.767 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2694443_0 00:07:25.767 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:25.767 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2694443 00:07:25.767 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:25.767 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2694443 00:07:25.767 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:25.767 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:25.767 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:07:25.767 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:25.767 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:25.767 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:25.767 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:25.767 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:25.767 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:25.767 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2694443 00:07:25.767 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:25.767 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2694443 00:07:25.767 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:07:25.767 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2694443 00:07:25.767 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:07:25.767 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2694443 00:07:25.767 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:25.767 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2694443 00:07:25.767 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:25.767 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2694443 00:07:25.767 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:25.767 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:25.767 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:25.767 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:25.767 element at address: 0x20001907c540 with size: 0.250488 MiB 00:07:25.767 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:25.767 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:25.767 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2694443 00:07:25.767 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:07:25.767 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2694443 00:07:25.767 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:07:25.767 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:25.767 element at address: 0x200027a69100 with size: 0.023743 MiB 00:07:25.768 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:25.768 element at address: 0x20000085b100 with size: 0.016113 MiB 00:07:25.768 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2694443 00:07:25.768 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:07:25.768 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:25.768 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:07:25.768 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2694443 00:07:25.768 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:25.768 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2694443 00:07:25.768 element at address: 0x20000085af00 with size: 0.000305 MiB 00:07:25.768 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2694443 00:07:25.768 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:07:25.768 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:25.768 12:21:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:25.768 12:21:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2694443 00:07:25.768 12:21:31 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2694443 ']' 00:07:25.768 12:21:31 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2694443 00:07:25.768 12:21:31 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:25.768 12:21:31 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.768 12:21:31 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2694443 00:07:26.026 12:21:31 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.026 12:21:31 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.026 12:21:31 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2694443' 00:07:26.026 killing process with pid 2694443 00:07:26.026 12:21:31 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2694443 00:07:26.026 12:21:31 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2694443 00:07:26.285 00:07:26.285 real 0m1.119s 00:07:26.285 user 0m1.171s 00:07:26.285 sys 0m0.432s 00:07:26.285 12:21:31 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.285 12:21:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:26.285 ************************************ 00:07:26.285 END TEST dpdk_mem_utility 00:07:26.285 ************************************ 00:07:26.285 12:21:31 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:07:26.285 12:21:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.285 12:21:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.285 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:07:26.285 ************************************ 00:07:26.285 START TEST event 00:07:26.285 ************************************ 00:07:26.285 12:21:31 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:07:26.285 * Looking for test storage... 00:07:26.285 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:07:26.285 12:21:31 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:26.285 12:21:31 event -- common/autotest_common.sh@1693 -- # lcov --version 00:07:26.285 12:21:31 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:26.543 12:21:32 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:26.543 12:21:32 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.543 12:21:32 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.543 12:21:32 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.543 12:21:32 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.543 12:21:32 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.543 12:21:32 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.543 12:21:32 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.543 12:21:32 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.543 12:21:32 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.543 12:21:32 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.543 12:21:32 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.543 12:21:32 event -- scripts/common.sh@344 -- # case "$op" in 00:07:26.543 12:21:32 event -- scripts/common.sh@345 -- # : 1 00:07:26.543 12:21:32 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.543 12:21:32 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.543 12:21:32 event -- scripts/common.sh@365 -- # decimal 1 00:07:26.543 12:21:32 event -- scripts/common.sh@353 -- # local d=1 00:07:26.543 12:21:32 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.543 12:21:32 event -- scripts/common.sh@355 -- # echo 1 00:07:26.543 12:21:32 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.543 12:21:32 event -- scripts/common.sh@366 -- # decimal 2 00:07:26.544 12:21:32 event -- scripts/common.sh@353 -- # local d=2 00:07:26.544 12:21:32 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.544 12:21:32 event -- scripts/common.sh@355 -- # echo 2 00:07:26.544 12:21:32 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.544 12:21:32 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.544 12:21:32 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.544 12:21:32 event -- scripts/common.sh@368 -- # return 0 00:07:26.544 12:21:32 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.544 12:21:32 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:26.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.544 --rc genhtml_branch_coverage=1 00:07:26.544 --rc genhtml_function_coverage=1 00:07:26.544 --rc genhtml_legend=1 00:07:26.544 --rc geninfo_all_blocks=1 00:07:26.544 --rc geninfo_unexecuted_blocks=1 00:07:26.544 00:07:26.544 ' 00:07:26.544 12:21:32 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:26.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.544 --rc genhtml_branch_coverage=1 00:07:26.544 --rc genhtml_function_coverage=1 00:07:26.544 --rc genhtml_legend=1 00:07:26.544 --rc geninfo_all_blocks=1 00:07:26.544 --rc geninfo_unexecuted_blocks=1 00:07:26.544 00:07:26.544 ' 00:07:26.544 12:21:32 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:26.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.544 --rc genhtml_branch_coverage=1 00:07:26.544 --rc genhtml_function_coverage=1 00:07:26.544 --rc genhtml_legend=1 00:07:26.544 --rc geninfo_all_blocks=1 00:07:26.544 --rc geninfo_unexecuted_blocks=1 00:07:26.544 00:07:26.544 ' 00:07:26.544 12:21:32 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:26.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.544 --rc genhtml_branch_coverage=1 00:07:26.544 --rc genhtml_function_coverage=1 00:07:26.544 --rc genhtml_legend=1 00:07:26.544 --rc geninfo_all_blocks=1 00:07:26.544 --rc geninfo_unexecuted_blocks=1 00:07:26.544 00:07:26.544 ' 00:07:26.544 12:21:32 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:26.544 12:21:32 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:26.544 12:21:32 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:26.544 12:21:32 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:26.544 12:21:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.544 12:21:32 event -- common/autotest_common.sh@10 -- # set +x 00:07:26.544 ************************************ 00:07:26.544 START TEST event_perf 00:07:26.544 ************************************ 00:07:26.544 12:21:32 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
00:07:26.544 Running I/O for 1 seconds...[2024-11-20 12:21:32.098081] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:07:26.544 [2024-11-20 12:21:32.098159] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694600 ] 00:07:26.544 [2024-11-20 12:21:32.168383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.544 [2024-11-20 12:21:32.235112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.544 [2024-11-20 12:21:32.235196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.544 [2024-11-20 12:21:32.235251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.544 [2024-11-20 12:21:32.235255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.920 Running I/O for 1 seconds... 00:07:27.920 lcore 0: 224101 00:07:27.920 lcore 1: 224101 00:07:27.920 lcore 2: 224101 00:07:27.920 lcore 3: 224101 00:07:27.920 done. 00:07:27.920 00:07:27.920 real 0m1.215s 00:07:27.920 user 0m4.143s 00:07:27.920 sys 0m0.066s 00:07:27.920 12:21:33 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.920 12:21:33 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:27.920 ************************************ 00:07:27.920 END TEST event_perf 00:07:27.920 ************************************ 00:07:27.920 12:21:33 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:27.920 12:21:33 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:27.920 12:21:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.920 12:21:33 event -- common/autotest_common.sh@10 -- # set +x 00:07:27.920 ************************************ 00:07:27.920 START TEST event_reactor 00:07:27.920 ************************************ 00:07:27.920 12:21:33 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:27.920 [2024-11-20 12:21:33.350449] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:07:27.920 [2024-11-20 12:21:33.350588] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694782 ] 00:07:27.920 [2024-11-20 12:21:33.426752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.920 [2024-11-20 12:21:33.490937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.856 test_start 00:07:28.856 oneshot 00:07:28.856 tick 100 00:07:28.856 tick 100 00:07:28.856 tick 250 00:07:28.856 tick 100 00:07:28.856 tick 100 00:07:28.856 tick 100 00:07:28.856 tick 250 00:07:28.856 tick 500 00:07:28.856 tick 100 00:07:28.856 tick 100 00:07:28.856 tick 250 00:07:28.856 tick 100 00:07:28.856 tick 100 00:07:28.856 test_end 00:07:28.856 00:07:28.856 real 0m1.223s 00:07:28.856 user 0m1.147s 00:07:28.856 sys 0m0.071s 00:07:28.856 12:21:34 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.856 12:21:34 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:28.856 ************************************ 00:07:28.856 END TEST event_reactor 00:07:28.856 ************************************ 00:07:28.856 12:21:34 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:28.856 12:21:34 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:28.856 12:21:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.856 12:21:34 event -- common/autotest_common.sh@10 -- # set +x 00:07:28.856 ************************************ 00:07:28.856 START TEST event_reactor_perf 00:07:28.856 ************************************ 00:07:28.856 12:21:34 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:28.856 [2024-11-20 12:21:34.604609] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:07:28.856 [2024-11-20 12:21:34.604696] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694920 ] 00:07:29.115 [2024-11-20 12:21:34.675387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.115 [2024-11-20 12:21:34.739825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.052 test_start 00:07:30.052 test_end 00:07:30.052 Performance: 327598 events per second 00:07:30.052 00:07:30.052 real 0m1.213s 00:07:30.052 user 0m1.146s 00:07:30.052 sys 0m0.061s 00:07:30.052 12:21:35 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.052 12:21:35 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:30.052 ************************************ 00:07:30.052 END TEST event_reactor_perf 00:07:30.052 ************************************ 00:07:30.345 12:21:35 event -- event/event.sh@49 -- # uname -s 00:07:30.345 12:21:35 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:30.345 12:21:35 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:30.345 12:21:35 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.345 12:21:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.345 12:21:35 event -- common/autotest_common.sh@10 -- # set +x 00:07:30.345 ************************************ 00:07:30.345 START TEST event_scheduler 00:07:30.345 ************************************ 00:07:30.345 12:21:35 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:30.345 * Looking for test storage... 
00:07:30.345 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:07:30.345 12:21:35 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:30.345 12:21:35 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:07:30.345 12:21:35 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.346 12:21:36 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.346 12:21:36 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:30.346 12:21:36 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.346 12:21:36 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.346 --rc genhtml_branch_coverage=1 00:07:30.346 --rc genhtml_function_coverage=1 00:07:30.346 --rc genhtml_legend=1 00:07:30.346 --rc geninfo_all_blocks=1 00:07:30.346 --rc geninfo_unexecuted_blocks=1 00:07:30.346 00:07:30.346 ' 00:07:30.346 12:21:36 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.346 --rc genhtml_branch_coverage=1 00:07:30.346 --rc genhtml_function_coverage=1 00:07:30.346 --rc genhtml_legend=1 00:07:30.346 --rc geninfo_all_blocks=1 00:07:30.346 --rc geninfo_unexecuted_blocks=1 00:07:30.346 00:07:30.346 ' 00:07:30.346 12:21:36 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.346 --rc genhtml_branch_coverage=1 00:07:30.346 --rc genhtml_function_coverage=1 00:07:30.346 --rc genhtml_legend=1 00:07:30.346 --rc geninfo_all_blocks=1 00:07:30.346 --rc geninfo_unexecuted_blocks=1 00:07:30.346 00:07:30.346 ' 00:07:30.346 12:21:36 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.346 --rc genhtml_branch_coverage=1 00:07:30.346 --rc genhtml_function_coverage=1 00:07:30.346 --rc genhtml_legend=1 00:07:30.346 --rc geninfo_all_blocks=1 00:07:30.346 --rc geninfo_unexecuted_blocks=1 00:07:30.346 00:07:30.346 ' 00:07:30.346 12:21:36 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:30.346 12:21:36 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2695073 00:07:30.346 12:21:36 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:30.346 12:21:36 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:30.346 12:21:36 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2695073 
00:07:30.346 12:21:36 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2695073 ']' 00:07:30.346 12:21:36 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.346 12:21:36 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.346 12:21:36 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.346 12:21:36 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.346 12:21:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:30.346 [2024-11-20 12:21:36.066836] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:07:30.346 [2024-11-20 12:21:36.066942] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2695073 ] 00:07:30.607 [2024-11-20 12:21:36.141250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.607 [2024-11-20 12:21:36.209997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.607 [2024-11-20 12:21:36.210051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.607 [2024-11-20 12:21:36.210123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.607 [2024-11-20 12:21:36.210103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.607 12:21:36 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.607 12:21:36 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:30.607 12:21:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:30.607 12:21:36 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.607 12:21:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:30.607 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:07:30.607 POWER: Cannot get available frequencies of lcore 0 00:07:30.868 [2024-11-20 12:21:36.384672] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:30.868 [2024-11-20 12:21:36.384707] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:30.868 [2024-11-20 12:21:36.384720] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:30.868 12:21:36 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.868 12:21:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:30.868 12:21:36 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.868 12:21:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:30.868 [2024-11-20 12:21:36.490550] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
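The scheduler app above was launched with --wait-for-rpc, so the framework pauses until a scheduler is selected over RPC; the NOTICE lines show the dynamic scheduler's defaults (load limit 20, core limit 80, core busy 95) being applied before framework_start_init completes, and the POWER warnings only indicate that cpufreq sysfs data is not readable on this runner. A minimal sketch of the same RPC sequence, using only method names that appear in the rpc_get_methods listing earlier in this log (the relative path assumes the SPDK repo root and the default /var/tmp/spdk.sock):

# select the dynamic scheduler while the target is parked in --wait-for-rpc
./scripts/rpc.py framework_set_scheduler dynamic
# complete subsystem initialization so the reactors start scheduling
./scripts/rpc.py framework_start_init
# confirm the active scheduler and inspect per-core thread placement
./scripts/rpc.py framework_get_scheduler
./scripts/rpc.py framework_get_reactors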
00:07:30.868 12:21:36 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.868 12:21:36 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:30.868 12:21:36 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.868 12:21:36 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.868 12:21:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:30.868 ************************************ 00:07:30.868 START TEST scheduler_create_thread 00:07:30.868 ************************************ 00:07:30.868 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.869 2 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.869 3 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.869 4 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.869 5 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.869 6 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.869 7 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.869 8 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.869 9 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.869 10 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.869 12:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.440 12:21:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.440 12:21:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:31.440 12:21:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:31.440 12:21:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.440 12:21:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:32.822 12:21:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.822 00:07:32.822 real 0m1.760s 00:07:32.822 user 0m0.017s 00:07:32.822 sys 0m0.004s 00:07:32.822 12:21:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.822 12:21:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:32.822 ************************************ 00:07:32.822 END TEST scheduler_create_thread 00:07:32.822 ************************************ 00:07:32.822 12:21:38 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:32.822 12:21:38 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2695073 00:07:32.822 12:21:38 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2695073 ']' 00:07:32.822 12:21:38 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2695073 00:07:32.822 12:21:38 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:32.822 12:21:38 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.822 12:21:38 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2695073 00:07:32.822 12:21:38 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:32.822 12:21:38 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:32.822 12:21:38 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2695073' 00:07:32.822 killing process with pid 2695073 00:07:32.822 12:21:38 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2695073 00:07:32.822 12:21:38 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2695073 00:07:33.083 [2024-11-20 12:21:38.738946] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
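[annotation] The trace above exercises the scheduler_plugin RPCs end to end: pinned active and idle threads are created with cpumasks 0x1 through 0x8, one unpinned thread has its activity raised with scheduler_thread_set_active, and another is deleted before shutdown. A minimal sketch that replays the same sequence by hand, assuming a running SPDK app and that the scheduler_plugin test module is importable (the socket path below is an assumption, not taken from this log; the RPC names and flags are verbatim from the trace):

# Sketch only: replay of the scheduler_plugin RPC sequence traced above.
rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin "$@"; }

for mask in 0x1 0x2 0x4 0x8; do
  rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100   # busy, pinned
done
for mask in 0x1 0x2 0x4 0x8; do
  rpc scheduler_thread_create -n idle_pinned -m "$mask" -a 0       # idle, pinned
done

rpc scheduler_thread_create -n one_third_active -a 30              # unpinned, 30% busy
tid=$(rpc scheduler_thread_create -n half_active -a 0)             # thread id comes back on stdout
rpc scheduler_thread_set_active "$tid" 50                          # bump it to 50% busy

tid=$(rpc scheduler_thread_create -n deleted -a 100)
rpc scheduler_thread_delete "$tid"                                 # deleted just before teardown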
00:07:33.342 00:07:33.342 real 0m3.067s 00:07:33.342 user 0m4.228s 00:07:33.342 sys 0m0.361s 00:07:33.342 12:21:38 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.342 12:21:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:33.342 ************************************ 00:07:33.342 END TEST event_scheduler 00:07:33.342 ************************************ 00:07:33.342 12:21:38 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:33.342 12:21:38 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:33.342 12:21:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.342 12:21:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.342 12:21:38 event -- common/autotest_common.sh@10 -- # set +x 00:07:33.342 ************************************ 00:07:33.342 START TEST app_repeat 00:07:33.342 ************************************ 00:07:33.342 12:21:38 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:33.342 12:21:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.342 12:21:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.342 12:21:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:33.342 12:21:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:33.342 12:21:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:33.342 12:21:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:33.342 12:21:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:33.342 12:21:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2695368 00:07:33.342 12:21:38 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:33.342 12:21:38 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:33.342 12:21:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2695368' 00:07:33.342 Process app_repeat pid: 2695368 00:07:33.342 12:21:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:33.342 12:21:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:33.342 spdk_app_start Round 0 00:07:33.342 12:21:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2695368 /var/tmp/spdk-nbd.sock 00:07:33.342 12:21:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2695368 ']' 00:07:33.342 12:21:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:33.342 12:21:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.342 12:21:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:33.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:33.342 12:21:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.342 12:21:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:33.342 [2024-11-20 12:21:38.982460] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:07:33.342 [2024-11-20 12:21:38.982579] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2695368 ] 00:07:33.342 [2024-11-20 12:21:39.055558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.609 [2024-11-20 12:21:39.119636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.609 [2024-11-20 12:21:39.119701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.609 12:21:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.609 12:21:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:33.609 12:21:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:33.869 Malloc0 00:07:34.127 12:21:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:34.387 Malloc1 00:07:34.387 12:21:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:34.387 12:21:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.387 12:21:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:34.387 12:21:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:34.387 12:21:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.387 12:21:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:34.387 12:21:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:34.387 12:21:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.387 12:21:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:34.387 12:21:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:34.387 12:21:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.387 12:21:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:34.387 12:21:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:34.387 12:21:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:34.387 12:21:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:34.387 12:21:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:34.646 /dev/nbd0 00:07:34.646 12:21:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:34.646 12:21:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:34.646 12:21:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:34.646 12:21:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:34.646 12:21:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:34.646 12:21:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:34.646 12:21:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
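[annotation] The xtrace here is the waitfornbd helper: it polls /proc/partitions until the kernel exposes the device, then proves the device is readable with a single O_DIRECT block and a nonzero-size check on the output file. A minimal re-implementation under the same assumptions (the function and variable names below are mine, and the 0.1 s retry delay is an assumption; the grep, dd, and stat steps are verbatim from the trace):

# Sketch: wait until /dev/<name> shows up and serves one 4 KiB direct read.
waitfornbd() {
  local nbd_name=$1 i tmp
  tmp=$(mktemp)
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1
  done
  for ((i = 1; i <= 20; i++)); do
    # 1 block with iflag=direct, so the read really hits the nbd backend
    if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null &&
       [ "$(stat -c %s "$tmp")" != 0 ]; then
      rm -f "$tmp"
      return 0
    fi
    sleep 0.1
  done
  rm -f "$tmp"
  return 1
}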
00:07:34.646 12:21:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:34.646 12:21:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:34.646 12:21:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:34.646 12:21:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:34.646 1+0 records in 00:07:34.646 1+0 records out 00:07:34.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219895 s, 18.6 MB/s 00:07:34.646 12:21:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:34.646 12:21:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:34.646 12:21:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:34.646 12:21:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:34.646 12:21:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:34.646 12:21:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:34.646 12:21:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:34.646 12:21:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:35.213 /dev/nbd1 00:07:35.213 12:21:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:35.213 12:21:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:35.213 12:21:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:35.213 12:21:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:35.213 12:21:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:35.213 12:21:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:35.213 12:21:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:35.213 12:21:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:35.213 12:21:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:35.213 12:21:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:35.213 12:21:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:35.213 1+0 records in 00:07:35.213 1+0 records out 00:07:35.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246011 s, 16.6 MB/s 00:07:35.213 12:21:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:35.213 12:21:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:35.213 12:21:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:35.213 12:21:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:35.213 12:21:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:35.213 12:21:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:35.213 12:21:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:35.213 12:21:40 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:35.213 12:21:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.213 12:21:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:35.472 { 00:07:35.472 "nbd_device": "/dev/nbd0", 00:07:35.472 "bdev_name": "Malloc0" 00:07:35.472 }, 00:07:35.472 { 00:07:35.472 "nbd_device": "/dev/nbd1", 00:07:35.472 "bdev_name": "Malloc1" 00:07:35.472 } 00:07:35.472 ]' 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:35.472 { 00:07:35.472 "nbd_device": "/dev/nbd0", 00:07:35.472 "bdev_name": "Malloc0" 00:07:35.472 }, 00:07:35.472 { 00:07:35.472 "nbd_device": "/dev/nbd1", 00:07:35.472 "bdev_name": "Malloc1" 00:07:35.472 } 00:07:35.472 ]' 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:35.472 /dev/nbd1' 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:35.472 /dev/nbd1' 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:35.472 256+0 records in 00:07:35.472 256+0 records out 00:07:35.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00649551 s, 161 MB/s 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:35.472 256+0 records in 00:07:35.472 256+0 records out 00:07:35.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250859 s, 41.8 MB/s 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:35.472 12:21:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:35.731 256+0 records in 00:07:35.731 256+0 records out 00:07:35.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268195 s, 39.1 MB/s 00:07:35.731 12:21:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:07:35.731 12:21:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:35.731 12:21:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:35.731 12:21:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:35.731 12:21:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:35.731 12:21:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:35.731 12:21:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:35.731 12:21:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.731 12:21:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:35.731 12:21:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.731 12:21:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:35.731 12:21:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:35.732 12:21:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:35.732 12:21:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.732 12:21:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:35.732 12:21:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:35.732 12:21:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:35.732 12:21:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.732 12:21:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:35.991 12:21:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:35.991 12:21:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:35.991 12:21:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:35.991 12:21:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.991 12:21:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.991 12:21:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:35.991 12:21:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:35.991 12:21:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.991 12:21:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.991 12:21:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:36.249 12:21:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:36.249 12:21:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:36.249 12:21:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:36.249 12:21:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:36.249 12:21:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:36.249 12:21:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:07:36.249 12:21:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:36.249 12:21:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:36.249 12:21:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:36.249 12:21:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.249 12:21:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:36.816 12:21:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:36.816 12:21:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:36.816 12:21:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:36.816 12:21:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:36.816 12:21:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:36.816 12:21:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:36.816 12:21:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:36.816 12:21:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:36.816 12:21:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:36.816 12:21:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:36.816 12:21:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:36.816 12:21:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:36.816 12:21:42 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:37.076 12:21:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:37.336 [2024-11-20 12:21:42.849559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:37.336 [2024-11-20 12:21:42.913375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.336 [2024-11-20 12:21:42.913375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.336 [2024-11-20 12:21:42.964463] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:37.336 [2024-11-20 12:21:42.964537] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:40.633 12:21:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:40.633 12:21:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:40.633 spdk_app_start Round 1 00:07:40.633 12:21:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2695368 /var/tmp/spdk-nbd.sock 00:07:40.633 12:21:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2695368 ']' 00:07:40.633 12:21:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:40.633 12:21:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.633 12:21:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:40.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
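[annotation] Round 1 now repeats the data path traced in Round 0 above: one random 1 MiB pattern is written through both nbd devices with O_DIRECT and then compared byte for byte. A condensed sketch of that pass (the pattern file location below is an assumption; the dd and cmp invocations are verbatim from the trace):

# Sketch: write/verify one 1 MiB random pattern across both exported devices.
pattern=/tmp/nbdrandtest
dd if=/dev/urandom of="$pattern" bs=4096 count=256             # 256 x 4 KiB = 1 MiB

for dev in /dev/nbd0 /dev/nbd1; do
  dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct    # write through to the bdev
done
for dev in /dev/nbd0 /dev/nbd1; do
  cmp -b -n 1M "$pattern" "$dev"                               # nonzero exit fails the test
done
rm "$pattern"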
00:07:40.633 12:21:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.633 12:21:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:40.633 12:21:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.633 12:21:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:40.633 12:21:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:40.633 Malloc0 00:07:40.634 12:21:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:41.201 Malloc1 00:07:41.201 12:21:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:41.201 12:21:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.201 12:21:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:41.201 12:21:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:41.201 12:21:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:41.201 12:21:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:41.201 12:21:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:41.201 12:21:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.201 12:21:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:41.201 12:21:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:41.201 12:21:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:41.201 12:21:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:41.201 12:21:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:41.201 12:21:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:41.201 12:21:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:41.201 12:21:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:41.460 /dev/nbd0 00:07:41.460 12:21:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:41.460 12:21:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:41.460 12:21:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:41.460 12:21:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:41.460 12:21:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:41.460 12:21:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:41.460 12:21:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:41.460 12:21:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:41.460 12:21:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:41.460 12:21:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:41.460 12:21:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:07:41.460 1+0 records in 00:07:41.460 1+0 records out 00:07:41.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194616 s, 21.0 MB/s 00:07:41.460 12:21:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:41.460 12:21:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:41.460 12:21:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:41.460 12:21:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:41.460 12:21:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:41.460 12:21:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:41.460 12:21:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:41.460 12:21:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:42.026 /dev/nbd1 00:07:42.026 12:21:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:42.026 12:21:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:42.026 12:21:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:42.026 12:21:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:42.026 12:21:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:42.026 12:21:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:42.026 12:21:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:42.026 12:21:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:42.026 12:21:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:42.026 12:21:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:42.026 12:21:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:42.026 1+0 records in 00:07:42.026 1+0 records out 00:07:42.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293525 s, 14.0 MB/s 00:07:42.026 12:21:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:42.026 12:21:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:42.026 12:21:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:42.026 12:21:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:42.026 12:21:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:42.026 12:21:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:42.026 12:21:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:42.026 12:21:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:42.026 12:21:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.026 12:21:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:42.285 { 00:07:42.285 
"nbd_device": "/dev/nbd0", 00:07:42.285 "bdev_name": "Malloc0" 00:07:42.285 }, 00:07:42.285 { 00:07:42.285 "nbd_device": "/dev/nbd1", 00:07:42.285 "bdev_name": "Malloc1" 00:07:42.285 } 00:07:42.285 ]' 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:42.285 { 00:07:42.285 "nbd_device": "/dev/nbd0", 00:07:42.285 "bdev_name": "Malloc0" 00:07:42.285 }, 00:07:42.285 { 00:07:42.285 "nbd_device": "/dev/nbd1", 00:07:42.285 "bdev_name": "Malloc1" 00:07:42.285 } 00:07:42.285 ]' 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:42.285 /dev/nbd1' 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:42.285 /dev/nbd1' 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:42.285 256+0 records in 00:07:42.285 256+0 records out 00:07:42.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00752652 s, 139 MB/s 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:42.285 256+0 records in 00:07:42.285 256+0 records out 00:07:42.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247376 s, 42.4 MB/s 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:42.285 256+0 records in 00:07:42.285 256+0 records out 00:07:42.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260987 s, 40.2 MB/s 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:42.285 12:21:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:42.286 12:21:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.286 12:21:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.286 12:21:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:42.286 12:21:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:42.286 12:21:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.286 12:21:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:42.851 12:21:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:42.851 12:21:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:42.851 12:21:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:42.851 12:21:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.851 12:21:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.851 12:21:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:42.851 12:21:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:42.851 12:21:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.851 12:21:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.851 12:21:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:43.109 12:21:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:43.109 12:21:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:43.109 12:21:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:43.109 12:21:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.109 12:21:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.109 12:21:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:43.109 12:21:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:43.109 12:21:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.109 12:21:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:43.109 12:21:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.109 12:21:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:43.366 12:21:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:43.367 12:21:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:43.367 12:21:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:43.367 12:21:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:43.367 12:21:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:43.367 12:21:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:43.367 12:21:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:43.367 12:21:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:43.367 12:21:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:43.367 12:21:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:43.367 12:21:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:43.367 12:21:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:43.367 12:21:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:43.932 12:21:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:43.932 [2024-11-20 12:21:49.610730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:43.932 [2024-11-20 12:21:49.673428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.932 [2024-11-20 12:21:49.673430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.191 [2024-11-20 12:21:49.725804] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:44.191 [2024-11-20 12:21:49.725872] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:46.719 12:21:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:46.719 12:21:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:46.719 spdk_app_start Round 2 00:07:46.719 12:21:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2695368 /var/tmp/spdk-nbd.sock 00:07:46.719 12:21:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2695368 ']' 00:07:46.719 12:21:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:46.719 12:21:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.719 12:21:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:46.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
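[annotation] The teardown just traced counts the still-exported devices by feeding nbd_get_disks output through jq and expects zero once both disks are stopped. The same check done by hand, against the socket used throughout this run (the jq filter and the grep -c / true fallback are verbatim from the trace):

# Sketch: assert that no nbd devices remain exported.
json=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
names=$(echo "$json" | jq -r '.[] | .nbd_device')      # one /dev/nbdX per line
count=$(echo "$names" | grep -c /dev/nbd || true)      # grep -c exits 1 on zero matches
if [ "$count" -ne 0 ]; then
  echo "still exported: $names" >&2
  exit 1
fi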
00:07:46.719 12:21:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.719 12:21:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:47.284 12:21:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.284 12:21:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:47.284 12:21:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:47.547 Malloc0 00:07:47.547 12:21:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:47.809 Malloc1 00:07:47.809 12:21:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:47.809 12:21:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.809 12:21:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:47.809 12:21:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:47.809 12:21:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:47.809 12:21:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:47.809 12:21:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:47.809 12:21:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.809 12:21:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:47.809 12:21:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:47.809 12:21:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:47.809 12:21:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:47.809 12:21:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:47.809 12:21:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:47.809 12:21:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:47.809 12:21:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:48.375 /dev/nbd0 00:07:48.376 12:21:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:48.376 12:21:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:48.376 12:21:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:48.376 12:21:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:48.376 12:21:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:48.376 12:21:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:48.376 12:21:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:48.376 12:21:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:48.376 12:21:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:48.376 12:21:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:48.376 12:21:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:07:48.376 1+0 records in 00:07:48.376 1+0 records out 00:07:48.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186159 s, 22.0 MB/s 00:07:48.376 12:21:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:48.376 12:21:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:48.376 12:21:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:48.376 12:21:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:48.376 12:21:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:48.376 12:21:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.376 12:21:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:48.376 12:21:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:48.634 /dev/nbd1 00:07:48.634 12:21:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:48.634 12:21:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:48.634 12:21:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:48.634 12:21:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:48.634 12:21:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:48.634 12:21:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:48.634 12:21:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:48.634 12:21:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:48.634 12:21:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:48.634 12:21:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:48.634 12:21:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:48.634 1+0 records in 00:07:48.634 1+0 records out 00:07:48.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216152 s, 18.9 MB/s 00:07:48.634 12:21:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:48.634 12:21:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:48.634 12:21:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:48.634 12:21:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:48.634 12:21:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:48.634 12:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.634 12:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:48.634 12:21:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:48.634 12:21:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.634 12:21:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:48.892 12:21:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:48.892 { 00:07:48.892 
"nbd_device": "/dev/nbd0", 00:07:48.892 "bdev_name": "Malloc0" 00:07:48.892 }, 00:07:48.892 { 00:07:48.892 "nbd_device": "/dev/nbd1", 00:07:48.892 "bdev_name": "Malloc1" 00:07:48.892 } 00:07:48.892 ]' 00:07:48.892 12:21:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:48.892 { 00:07:48.892 "nbd_device": "/dev/nbd0", 00:07:48.892 "bdev_name": "Malloc0" 00:07:48.892 }, 00:07:48.892 { 00:07:48.892 "nbd_device": "/dev/nbd1", 00:07:48.892 "bdev_name": "Malloc1" 00:07:48.892 } 00:07:48.892 ]' 00:07:48.892 12:21:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:48.892 12:21:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:48.892 /dev/nbd1' 00:07:48.892 12:21:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:48.892 /dev/nbd1' 00:07:48.892 12:21:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:48.892 12:21:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:48.892 12:21:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:48.892 12:21:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:48.892 12:21:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:48.892 12:21:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:48.892 12:21:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.892 12:21:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:48.892 12:21:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:48.892 12:21:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:48.892 12:21:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:48.892 12:21:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:49.150 256+0 records in 00:07:49.150 256+0 records out 00:07:49.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00522104 s, 201 MB/s 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:49.150 256+0 records in 00:07:49.150 256+0 records out 00:07:49.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237404 s, 44.2 MB/s 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:49.150 256+0 records in 00:07:49.150 256+0 records out 00:07:49.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025093 s, 41.8 MB/s 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.150 12:21:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:49.409 12:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:49.409 12:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:49.409 12:21:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:49.409 12:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.409 12:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.409 12:21:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:49.409 12:21:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:49.409 12:21:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.409 12:21:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.409 12:21:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:49.975 12:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:49.975 12:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:49.976 12:21:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:49.976 12:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.976 12:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.976 12:21:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:49.976 12:21:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:49.976 12:21:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.976 12:21:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:49.976 12:21:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.976 12:21:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:50.233 12:21:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:50.233 12:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:50.233 12:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:50.233 12:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:50.233 12:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:50.233 12:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:50.233 12:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:50.233 12:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:50.233 12:21:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:50.233 12:21:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:50.233 12:21:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:50.233 12:21:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:50.233 12:21:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:50.491 12:21:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:50.749 [2024-11-20 12:21:56.330080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:50.749 [2024-11-20 12:21:56.393856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.749 [2024-11-20 12:21:56.393924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.749 [2024-11-20 12:21:56.444987] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:50.749 [2024-11-20 12:21:56.445056] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:54.033 12:21:59 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2695368 /var/tmp/spdk-nbd.sock 00:07:54.033 12:21:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2695368 ']' 00:07:54.033 12:21:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:54.033 12:21:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.033 12:21:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:54.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
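[annotation] Round 3 is the last lap of the app_repeat driver: the app was started once with -t 4 (matching the repeat_times=4 local seen earlier in the trace), and each spdk_kill_instance SIGTERM makes it cycle spdk_app_start rather than exit. A sketch of the driver's shape, condensed from the event.sh trace in this run; waitforlisten and killprocess are the autotest_common.sh helpers, and the backgrounding/$! detail and $SPDK_DIR path are assumptions:

# Sketch: app_repeat driver loop (Rounds 0-2 in the loop, Round 3 after it).
"$SPDK_DIR"/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
repeat_pid=$!
trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

for i in {0..2}; do
  echo "spdk_app_start Round $i"
  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock    # RPC socket is back up
  # ... malloc bdevs, nbd export, dd write/verify, as traced above ...
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
  sleep 3                                               # let the app restart
done
waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock      # Round 3 comes up
killprocess "$repeat_pid"                               # final teardown, traced below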
00:07:54.033 12:21:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.033 12:21:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:54.033 12:21:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.033 12:21:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:54.033 12:21:59 event.app_repeat -- event/event.sh@39 -- # killprocess 2695368 00:07:54.033 12:21:59 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2695368 ']' 00:07:54.033 12:21:59 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2695368 00:07:54.033 12:21:59 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:54.033 12:21:59 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.033 12:21:59 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2695368 00:07:54.033 12:21:59 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.033 12:21:59 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.033 12:21:59 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2695368' 00:07:54.033 killing process with pid 2695368 00:07:54.033 12:21:59 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2695368 00:07:54.033 12:21:59 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2695368 00:07:54.033 spdk_app_start is called in Round 0. 00:07:54.033 Shutdown signal received, stop current app iteration 00:07:54.033 Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 reinitialization... 00:07:54.033 spdk_app_start is called in Round 1. 00:07:54.033 Shutdown signal received, stop current app iteration 00:07:54.033 Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 reinitialization... 00:07:54.033 spdk_app_start is called in Round 2. 00:07:54.033 Shutdown signal received, stop current app iteration 00:07:54.034 Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 reinitialization... 00:07:54.034 spdk_app_start is called in Round 3. 
00:07:54.034 Shutdown signal received, stop current app iteration 00:07:54.034 12:21:59 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:54.034 12:21:59 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:54.034 00:07:54.034 real 0m20.760s 00:07:54.034 user 0m47.037s 00:07:54.034 sys 0m3.628s 00:07:54.034 12:21:59 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.034 12:21:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:54.034 ************************************ 00:07:54.034 END TEST app_repeat 00:07:54.034 ************************************ 00:07:54.034 12:21:59 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:54.034 12:21:59 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:54.034 12:21:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.034 12:21:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.034 12:21:59 event -- common/autotest_common.sh@10 -- # set +x 00:07:54.034 ************************************ 00:07:54.034 START TEST cpu_locks 00:07:54.034 ************************************ 00:07:54.034 12:21:59 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:54.293 * Looking for test storage... 00:07:54.293 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:07:54.293 12:21:59 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:54.293 12:21:59 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:54.293 12:21:59 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:54.293 12:21:59 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.293 12:21:59 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:54.293 12:21:59 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.293 12:21:59 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:54.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.294 --rc genhtml_branch_coverage=1 00:07:54.294 --rc genhtml_function_coverage=1 00:07:54.294 --rc genhtml_legend=1 00:07:54.294 --rc geninfo_all_blocks=1 00:07:54.294 --rc geninfo_unexecuted_blocks=1 00:07:54.294 00:07:54.294 ' 00:07:54.294 12:21:59 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:54.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.294 --rc genhtml_branch_coverage=1 00:07:54.294 --rc genhtml_function_coverage=1 00:07:54.294 --rc genhtml_legend=1 00:07:54.294 --rc geninfo_all_blocks=1 00:07:54.294 --rc geninfo_unexecuted_blocks=1 00:07:54.294 00:07:54.294 ' 00:07:54.294 12:21:59 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:54.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.294 --rc genhtml_branch_coverage=1 00:07:54.294 --rc genhtml_function_coverage=1 00:07:54.294 --rc genhtml_legend=1 00:07:54.294 --rc geninfo_all_blocks=1 00:07:54.294 --rc geninfo_unexecuted_blocks=1 00:07:54.294 00:07:54.294 ' 00:07:54.294 12:21:59 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:54.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.294 --rc genhtml_branch_coverage=1 00:07:54.294 --rc genhtml_function_coverage=1 00:07:54.294 --rc genhtml_legend=1 00:07:54.294 --rc geninfo_all_blocks=1 00:07:54.294 --rc geninfo_unexecuted_blocks=1 00:07:54.294 00:07:54.294 ' 00:07:54.294 12:21:59 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:54.294 12:21:59 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:54.294 12:21:59 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:54.294 12:21:59 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:54.294 12:21:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.294 12:21:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.294 12:21:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:54.294 ************************************ 
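The scripts/common.sh trace above is the lcov version gate: lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits both versions on '.', '-' and ':' and walks the components. A condensed sketch of that comparison — it assumes purely numeric components, whereas the real helper also normalizes them through a decimal() check:

    # Compare dotted versions component-wise: cmp_versions A op B.
    cmp_versions() {
        local IFS=.-:                  # split on the same separators as the trace
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '=' || $op == '<=' || $op == '>=' ]]   # all components equal
    }

    cmp_versions 1.15 '<' 2 && echo "lcov predates 2.x"  # true: 1 < 2 at index 0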
00:07:54.294 START TEST default_locks 00:07:54.294 ************************************ 00:07:54.294 12:21:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:54.294 12:21:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2697410 00:07:54.294 12:21:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:54.294 12:21:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2697410 00:07:54.294 12:21:59 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2697410 ']' 00:07:54.294 12:21:59 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.294 12:21:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.294 12:21:59 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.294 12:21:59 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.294 12:21:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:54.294 [2024-11-20 12:21:59.992538] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:07:54.294 [2024-11-20 12:21:59.992631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2697410 ] 00:07:54.552 [2024-11-20 12:22:00.064068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.552 [2024-11-20 12:22:00.126911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.810 12:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.810 12:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:54.810 12:22:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2697410 00:07:54.810 12:22:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:54.810 12:22:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2697410 00:07:55.068 lslocks: write error 00:07:55.068 12:22:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2697410 00:07:55.068 12:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2697410 ']' 00:07:55.068 12:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2697410 00:07:55.068 12:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:55.068 12:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.068 12:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2697410 00:07:55.326 12:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.326 12:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.326 12:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2697410' 00:07:55.326 killing process with pid 2697410 00:07:55.326 12:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2697410 00:07:55.326 12:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2697410 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2697410 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2697410 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2697410 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2697410 ']' 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:55.584 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2697410) - No such process 00:07:55.584 ERROR: process (pid: 2697410) is no longer running 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:55.584 00:07:55.584 real 0m1.213s 00:07:55.584 user 0m1.260s 00:07:55.584 sys 0m0.553s 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.584 12:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:55.584 ************************************ 00:07:55.584 END TEST default_locks 00:07:55.584 ************************************ 00:07:55.584 12:22:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:55.584 12:22:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.584 12:22:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.584 12:22:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:55.584 ************************************ 00:07:55.584 START TEST default_locks_via_rpc 00:07:55.584 ************************************ 00:07:55.584 12:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:55.584 12:22:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2697576 00:07:55.584 12:22:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2697576 00:07:55.584 12:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2697576 ']' 00:07:55.584 12:22:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:55.584 12:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.584 12:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.584 12:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
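The failed-waitforlisten sequence above is autotest_common.sh's inverted-expectation wrapper at work: NOT runs the command, captures its status as es, folds signal deaths (status > 128) down to a plain failure, and succeeds only when the command failed. A minimal sketch of just that logic:

    # Succeed iff the wrapped command fails; deaths by signal count as failure.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=1     # 128+N means killed by signal N
        (( !es == 0 ))             # exit 0 only when es is non-zero
    }

    NOT false && echo "false failed, as expected"
    NOT true  || echo "true succeeded, so NOT itself fails"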
00:07:55.584 12:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.584 12:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.584 [2024-11-20 12:22:01.306300] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:07:55.584 [2024-11-20 12:22:01.306472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2697576 ] 00:07:55.842 [2024-11-20 12:22:01.395036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.842 [2024-11-20 12:22:01.457706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.100 12:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.100 12:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:56.100 12:22:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:56.100 12:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.100 12:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.100 12:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.100 12:22:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:56.100 12:22:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:56.100 12:22:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:56.100 12:22:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:56.100 12:22:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:56.100 12:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.100 12:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.100 12:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.100 12:22:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2697576 00:07:56.100 12:22:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2697576 00:07:56.100 12:22:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:56.667 12:22:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2697576 00:07:56.667 12:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2697576 ']' 00:07:56.667 12:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2697576 00:07:56.667 12:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:56.667 12:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.667 12:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2697576 00:07:56.667 12:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.667 
12:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.667 12:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2697576' 00:07:56.667 killing process with pid 2697576 00:07:56.667 12:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2697576 00:07:56.667 12:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2697576 00:07:56.925 00:07:56.925 real 0m1.399s 00:07:56.925 user 0m1.452s 00:07:56.925 sys 0m0.590s 00:07:56.925 12:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.925 12:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.925 ************************************ 00:07:56.925 END TEST default_locks_via_rpc 00:07:56.925 ************************************ 00:07:56.925 12:22:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:56.925 12:22:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.925 12:22:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.925 12:22:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:56.925 ************************************ 00:07:56.925 START TEST non_locking_app_on_locked_coremask 00:07:56.925 ************************************ 00:07:56.925 12:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:56.925 12:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2697710 00:07:56.925 12:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2697710 /var/tmp/spdk.sock 00:07:56.925 12:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:56.925 12:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2697710 ']' 00:07:56.925 12:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.925 12:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.926 12:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.926 12:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.926 12:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:57.184 [2024-11-20 12:22:02.691457] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
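default_locks_via_rpc, which finishes above, toggles the core locks of a live target: framework_disable_cpumask_locks releases the per-core lock files, framework_enable_cpumask_locks reclaims them, and lslocks is the witness both times. A sketch of the round trip, with the spdk_tgt path and the sleep standing in for waitforlisten as assumptions:

    ./build/bin/spdk_tgt -m 0x1 &              # locks are held by default
    pid=$!
    sleep 1                                    # stand-in for waitforlisten

    ./scripts/rpc.py framework_disable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "core lock released"

    ./scripts/rpc.py framework_enable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core 0 lock re-acquired"

    kill "$pid"; wait "$pid"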
00:07:57.184 [2024-11-20 12:22:02.691577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2697710 ] 00:07:57.184 [2024-11-20 12:22:02.763903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.184 [2024-11-20 12:22:02.828370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.442 12:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.442 12:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:57.442 12:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2697713 00:07:57.442 12:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2697713 /var/tmp/spdk2.sock 00:07:57.442 12:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2697713 ']' 00:07:57.442 12:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:57.442 12:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.442 12:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:57.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:57.442 12:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.442 12:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:57.442 12:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:57.442 [2024-11-20 12:22:03.153106] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:07:57.442 [2024-11-20 12:22:03.153210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2697713 ] 00:07:57.701 [2024-11-20 12:22:03.265846] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
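Every test in this suite blocks on waitforlisten, which prints the 'Waiting for process...' line and then polls the UNIX-domain RPC socket, bounded by the max_retries=100 visible in the trace. One plausible shape for that loop — polling rpc_get_methods is an assumption about the probe, not confirmed by this excerpt:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }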
00:07:57.701 [2024-11-20 12:22:03.265886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.701 [2024-11-20 12:22:03.388925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.267 12:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.268 12:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:58.268 12:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2697710 00:07:58.268 12:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:58.268 12:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2697710 00:07:58.833 lslocks: write error 00:07:58.833 12:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2697710 00:07:58.833 12:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2697710 ']' 00:07:58.833 12:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2697710 00:07:58.833 12:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:58.833 12:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.834 12:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2697710 00:07:59.091 12:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.091 12:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.091 12:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2697710' 00:07:59.091 killing process with pid 2697710 00:07:59.091 12:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2697710 00:07:59.091 12:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2697710 00:07:59.658 12:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2697713 00:07:59.658 12:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2697713 ']' 00:07:59.658 12:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2697713 00:07:59.658 12:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:59.658 12:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.658 12:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2697713 00:07:59.658 12:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.658 12:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.658 12:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2697713' 00:07:59.658 
killing process with pid 2697713 00:07:59.658 12:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2697713 00:07:59.658 12:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2697713 00:07:59.916 00:07:59.916 real 0m2.920s 00:07:59.916 user 0m3.048s 00:07:59.916 sys 0m1.041s 00:07:59.916 12:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.916 12:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.916 ************************************ 00:07:59.916 END TEST non_locking_app_on_locked_coremask 00:07:59.916 ************************************ 00:07:59.916 12:22:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:59.916 12:22:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.916 12:22:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.916 12:22:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:59.916 ************************************ 00:07:59.916 START TEST locking_app_on_unlocked_coremask 00:07:59.916 ************************************ 00:07:59.916 12:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:59.916 12:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2698018 00:07:59.916 12:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2698018 /var/tmp/spdk.sock 00:07:59.916 12:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2698018 ']' 00:07:59.916 12:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:59.916 12:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.916 12:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.916 12:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.916 12:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.916 12:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.916 [2024-11-20 12:22:05.678231] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:07:59.916 [2024-11-20 12:22:05.678340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2698018 ] 00:08:00.218 [2024-11-20 12:22:05.753004] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
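The killprocess steps repeated throughout this run share one shape: probe the pid with kill -0, read its comm to confirm it is still an SPDK reactor rather than a sudo wrapper, announce the kill, send SIGTERM, and reap with wait. A simplified sketch (the real helper has extra handling for sudo-wrapped children):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                     # still running?
        [ "$(uname)" = Linux ] || return 0             # comm check is Linux-only
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1         # never signal bare sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                            # SIGTERM status is expected
    }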
00:08:00.218 [2024-11-20 12:22:05.753039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.218 [2024-11-20 12:22:05.815920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.475 12:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.475 12:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:00.475 12:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2698033 00:08:00.475 12:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2698033 /var/tmp/spdk2.sock 00:08:00.475 12:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:00.475 12:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2698033 ']' 00:08:00.475 12:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:00.475 12:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.475 12:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:00.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:00.475 12:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.475 12:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.475 [2024-11-20 12:22:06.129172] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:08:00.475 [2024-11-20 12:22:06.129270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2698033 ] 00:08:00.731 [2024-11-20 12:22:06.242171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.731 [2024-11-20 12:22:06.370180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.294 12:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.294 12:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:01.295 12:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2698033 00:08:01.295 12:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2698033 00:08:01.295 12:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:02.228 lslocks: write error 00:08:02.228 12:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2698018 00:08:02.228 12:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2698018 ']' 00:08:02.228 12:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2698018 00:08:02.228 12:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:02.228 12:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.228 12:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2698018 00:08:02.228 12:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.228 12:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.228 12:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2698018' 00:08:02.228 killing process with pid 2698018 00:08:02.228 12:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2698018 00:08:02.228 12:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2698018 00:08:02.795 12:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2698033 00:08:02.795 12:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2698033 ']' 00:08:02.795 12:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2698033 00:08:02.795 12:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:02.795 12:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.796 12:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2698033 00:08:02.796 12:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.796 12:22:08 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.796 12:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2698033' 00:08:02.796 killing process with pid 2698033 00:08:02.796 12:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2698033 00:08:02.796 12:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2698033 00:08:03.054 00:08:03.054 real 0m3.101s 00:08:03.054 user 0m3.289s 00:08:03.054 sys 0m1.095s 00:08:03.054 12:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.054 12:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:03.054 ************************************ 00:08:03.054 END TEST locking_app_on_unlocked_coremask 00:08:03.054 ************************************ 00:08:03.054 12:22:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:03.054 12:22:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.054 12:22:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.054 12:22:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:03.054 ************************************ 00:08:03.054 START TEST locking_app_on_locked_coremask 00:08:03.055 ************************************ 00:08:03.055 12:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:08:03.055 12:22:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2698261 00:08:03.055 12:22:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:03.055 12:22:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2698261 /var/tmp/spdk.sock 00:08:03.055 12:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2698261 ']' 00:08:03.055 12:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.055 12:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.055 12:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.055 12:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.055 12:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:03.055 [2024-11-20 12:22:08.790104] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
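locking_app_on_unlocked_coremask, closed out above, demonstrates coexistence: a primary started with --disable-cpumask-locks leaves core 0 unclaimed, so a second target with locks enabled can take it on the very same mask, and lslocks should attribute the lock to the second pid only. A rough sketch — it omits the per-pid hugepage file prefixes the harness uses to keep the two instances apart:

    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # claims no core lock
    pid1=$!
    sleep 1
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # same core, locks on
    pid2=$!
    sleep 1

    lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "pid2 owns core 0"
    lslocks -p "$pid1" | grep -q spdk_cpu_lock || echo "pid1 holds nothing"

    kill "$pid1" "$pid2"; wait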
00:08:03.055 [2024-11-20 12:22:08.790194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2698261 ] 00:08:03.315 [2024-11-20 12:22:08.861992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.315 [2024-11-20 12:22:08.924938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.575 12:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.575 12:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:03.575 12:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2698352 00:08:03.575 12:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:03.575 12:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2698352 /var/tmp/spdk2.sock 00:08:03.575 12:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:03.575 12:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2698352 /var/tmp/spdk2.sock 00:08:03.575 12:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:03.575 12:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.575 12:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:03.575 12:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.575 12:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2698352 /var/tmp/spdk2.sock 00:08:03.575 12:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2698352 ']' 00:08:03.575 12:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:03.575 12:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.575 12:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:03.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:03.575 12:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.575 12:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:03.575 [2024-11-20 12:22:09.236716] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:08:03.575 [2024-11-20 12:22:09.236820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2698352 ] 00:08:03.832 [2024-11-20 12:22:09.350121] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2698261 has claimed it. 00:08:03.833 [2024-11-20 12:22:09.350177] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:04.399 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2698352) - No such process 00:08:04.399 ERROR: process (pid: 2698352) is no longer running 00:08:04.399 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.399 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:04.399 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:04.399 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:04.399 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:04.399 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:04.399 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2698261 00:08:04.399 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2698261 00:08:04.399 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:04.966 lslocks: write error 00:08:04.966 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2698261 00:08:04.966 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2698261 ']' 00:08:04.966 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2698261 00:08:04.966 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:04.966 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.966 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2698261 00:08:04.966 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.966 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.966 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2698261' 00:08:04.966 killing process with pid 2698261 00:08:04.966 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2698261 00:08:04.966 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2698261 00:08:05.271 00:08:05.271 real 0m2.201s 00:08:05.271 user 0m2.572s 00:08:05.271 sys 0m0.699s 00:08:05.271 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:08:05.271 12:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:05.271 ************************************ 00:08:05.271 END TEST locking_app_on_locked_coremask 00:08:05.271 ************************************ 00:08:05.271 12:22:10 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:05.271 12:22:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.271 12:22:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.271 12:22:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:05.271 ************************************ 00:08:05.271 START TEST locking_overlapped_coremask 00:08:05.271 ************************************ 00:08:05.271 12:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:08:05.271 12:22:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2698493 00:08:05.271 12:22:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:05.271 12:22:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2698493 /var/tmp/spdk.sock 00:08:05.271 12:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2698493 ']' 00:08:05.271 12:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.271 12:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.271 12:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.271 12:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.271 12:22:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:05.582 [2024-11-20 12:22:11.022074] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
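locking_app_on_locked_coremask, whose END banner appears above, drives the negative path: with core 0 already claimed, the second spdk_tgt aborts with 'Cannot create lock on core 0', waitforlisten on it fails (hence the NOT wrapper and the 'No such process' from kill), and the surviving primary must still hold the lock. A sketch of that assertion, reusing the helpers sketched earlier:

    ./build/bin/spdk_tgt -m 0x1 &                     # primary claims core 0
    pid1=$!
    waitforlisten "$pid1"

    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
    pid2=$!
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock     # must fail: core is taken

    lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "core 0 still locked by pid1"
    killprocess "$pid1"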
00:08:05.582 [2024-11-20 12:22:11.022178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2698493 ] 00:08:05.582 [2024-11-20 12:22:11.094068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:05.582 [2024-11-20 12:22:11.160165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.582 [2024-11-20 12:22:11.160259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.582 [2024-11-20 12:22:11.160263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.840 12:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.840 12:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:05.840 12:22:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2698580 00:08:05.840 12:22:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:05.840 12:22:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2698580 /var/tmp/spdk2.sock 00:08:05.840 12:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:05.840 12:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2698580 /var/tmp/spdk2.sock 00:08:05.840 12:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:05.840 12:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.840 12:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:05.840 12:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.840 12:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2698580 /var/tmp/spdk2.sock 00:08:05.840 12:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2698580 ']' 00:08:05.840 12:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:05.840 12:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.840 12:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:05.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:05.840 12:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.840 12:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:05.840 [2024-11-20 12:22:11.477140] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:08:05.840 [2024-11-20 12:22:11.477250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2698580 ] 00:08:05.840 [2024-11-20 12:22:11.591374] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2698493 has claimed it. 00:08:05.840 [2024-11-20 12:22:11.591439] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:06.773 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2698580) - No such process 00:08:06.773 ERROR: process (pid: 2698580) is no longer running 00:08:06.773 12:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.773 12:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:06.773 12:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:06.773 12:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:06.773 12:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:06.773 12:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:06.773 12:22:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:06.773 12:22:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:06.773 12:22:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:06.773 12:22:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:06.773 12:22:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2698493 00:08:06.773 12:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2698493 ']' 00:08:06.773 12:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2698493 00:08:06.773 12:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:08:06.773 12:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.773 12:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2698493 00:08:06.773 12:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.773 12:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.773 12:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2698493' 00:08:06.773 killing process with pid 2698493 00:08:06.773 12:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2698493 00:08:06.773 12:22:12 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2698493 00:08:07.031 00:08:07.031 real 0m1.634s 00:08:07.031 user 0m4.642s 00:08:07.031 sys 0m0.441s 00:08:07.031 12:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.031 12:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:07.031 ************************************ 00:08:07.031 END TEST locking_overlapped_coremask 00:08:07.031 ************************************ 00:08:07.031 12:22:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:07.031 12:22:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.031 12:22:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.031 12:22:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:07.031 ************************************ 00:08:07.031 START TEST locking_overlapped_coremask_via_rpc 00:08:07.031 ************************************ 00:08:07.031 12:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:08:07.031 12:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2698705 00:08:07.031 12:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:07.032 12:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2698705 /var/tmp/spdk.sock 00:08:07.032 12:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2698705 ']' 00:08:07.032 12:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.032 12:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.032 12:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.032 12:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.032 12:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.032 [2024-11-20 12:22:12.689185] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:08:07.032 [2024-11-20 12:22:12.689277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2698705 ] 00:08:07.032 [2024-11-20 12:22:12.760418] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
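Both targets in this test (the one just launched and the 0x1c one started below) run with --disable-cpumask-locks, which is why spdk_app_start reports the core locks as deactivated: no /var/tmp/spdk_cpu_lock_* files are taken at boot, so the two masks (0x7 and 0x1c, overlapping on core 2) can come up side by side until one of them re-enables locking over RPC. A minimal sketch of that setup, with command lines as they appear in this log (the backgrounding and the ls are illustrative only):

  # cores 0-2, default RPC socket, no lock files created at startup
  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  # cores 2-4, second RPC socket; overlaps the first target on core 2
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  ls /var/tmp/spdk_cpu_lock_*   # nothing yet -- locks appear only after framework_enable_cpumask_locks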
00:08:07.032 [2024-11-20 12:22:12.760452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:07.290 [2024-11-20 12:22:12.826350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.290 [2024-11-20 12:22:12.826474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.290 [2024-11-20 12:22:12.826478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.549 12:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.549 12:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:07.549 12:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2698716 00:08:07.549 12:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:07.549 12:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2698716 /var/tmp/spdk2.sock 00:08:07.549 12:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2698716 ']' 00:08:07.549 12:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:07.549 12:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.549 12:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:07.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:07.549 12:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.549 12:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.549 [2024-11-20 12:22:13.145873] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:08:07.549 [2024-11-20 12:22:13.145974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2698716 ] 00:08:07.549 [2024-11-20 12:22:13.258464] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:07.549 [2024-11-20 12:22:13.258514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:07.807 [2024-11-20 12:22:13.389153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.808 [2024-11-20 12:22:13.392530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:07.808 [2024-11-20 12:22:13.392533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.742 [2024-11-20 12:22:14.258585] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2698705 has claimed it. 
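The sequence above is the RPC-driven variant of the earlier conflict: the first target claims cores 0-2 through framework_enable_cpumask_locks, so the same call against the second target's socket trips over core 2 and fails, and the JSON-RPC error recorded next (-32603, "Failed to claim CPU core: 2") is the expected outcome. Reduced to the two rpc.py calls involved (sockets and method names as used throughout this log):

  ./scripts/rpc.py framework_enable_cpumask_locks                          # first target: claims cores 0-2
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: fails on core 2
  ls /var/tmp/spdk_cpu_lock_*   # expect _000 _001 _002, matching check_remaining_locks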
00:08:08.742 request: 00:08:08.742 { 00:08:08.742 "method": "framework_enable_cpumask_locks", 00:08:08.742 "req_id": 1 00:08:08.742 } 00:08:08.742 Got JSON-RPC error response 00:08:08.742 response: 00:08:08.742 { 00:08:08.742 "code": -32603, 00:08:08.742 "message": "Failed to claim CPU core: 2" 00:08:08.742 } 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2698705 /var/tmp/spdk.sock 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2698705 ']' 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.742 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.743 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.743 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.743 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.001 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.001 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:09.001 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2698716 /var/tmp/spdk2.sock 00:08:09.001 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2698716 ']' 00:08:09.001 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:09.001 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.001 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:09.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:09.001 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.001 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.260 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.260 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:09.260 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:09.260 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:09.260 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:09.260 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:09.260 00:08:09.260 real 0m2.308s 00:08:09.260 user 0m1.406s 00:08:09.260 sys 0m0.214s 00:08:09.260 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.260 12:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.260 ************************************ 00:08:09.260 END TEST locking_overlapped_coremask_via_rpc 00:08:09.260 ************************************ 00:08:09.260 12:22:14 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:09.260 12:22:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2698705 ]] 00:08:09.260 12:22:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2698705 00:08:09.260 12:22:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2698705 ']' 00:08:09.260 12:22:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2698705 00:08:09.260 12:22:14 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:09.260 12:22:14 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.260 12:22:14 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2698705 00:08:09.260 12:22:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.260 12:22:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.260 12:22:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2698705' 00:08:09.260 killing process with pid 2698705 00:08:09.260 12:22:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2698705 00:08:09.260 12:22:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2698705 00:08:09.827 12:22:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2698716 ]] 00:08:09.827 12:22:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2698716 00:08:09.827 12:22:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2698716 ']' 00:08:09.827 12:22:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2698716 00:08:09.827 12:22:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:09.827 12:22:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:08:09.827 12:22:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2698716 00:08:09.827 12:22:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:09.827 12:22:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:09.827 12:22:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2698716' 00:08:09.827 killing process with pid 2698716 00:08:09.827 12:22:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2698716 00:08:09.827 12:22:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2698716 00:08:10.085 12:22:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:10.085 12:22:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:10.085 12:22:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2698705 ]] 00:08:10.085 12:22:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2698705 00:08:10.085 12:22:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2698705 ']' 00:08:10.085 12:22:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2698705 00:08:10.085 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2698705) - No such process 00:08:10.085 12:22:15 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2698705 is not found' 00:08:10.085 Process with pid 2698705 is not found 00:08:10.085 12:22:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2698716 ]] 00:08:10.085 12:22:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2698716 00:08:10.085 12:22:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2698716 ']' 00:08:10.085 12:22:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2698716 00:08:10.085 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2698716) - No such process 00:08:10.085 12:22:15 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2698716 is not found' 00:08:10.085 Process with pid 2698716 is not found 00:08:10.085 12:22:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:10.085 00:08:10.085 real 0m15.924s 00:08:10.085 user 0m29.689s 00:08:10.085 sys 0m5.503s 00:08:10.085 12:22:15 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.085 12:22:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:10.085 ************************************ 00:08:10.085 END TEST cpu_locks 00:08:10.085 ************************************ 00:08:10.085 00:08:10.085 real 0m43.819s 00:08:10.085 user 1m27.576s 00:08:10.085 sys 0m9.967s 00:08:10.085 12:22:15 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.085 12:22:15 event -- common/autotest_common.sh@10 -- # set +x 00:08:10.085 ************************************ 00:08:10.085 END TEST event 00:08:10.085 ************************************ 00:08:10.085 12:22:15 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:08:10.085 12:22:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.085 12:22:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.085 12:22:15 -- common/autotest_common.sh@10 -- # set +x 00:08:10.085 ************************************ 00:08:10.085 START TEST thread 00:08:10.085 ************************************ 00:08:10.085 12:22:15 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:08:10.085 * Looking for test storage... 00:08:10.085 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:08:10.085 12:22:15 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:10.085 12:22:15 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:08:10.085 12:22:15 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:10.344 12:22:15 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:10.344 12:22:15 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.344 12:22:15 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.344 12:22:15 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.344 12:22:15 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.344 12:22:15 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.344 12:22:15 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.344 12:22:15 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.344 12:22:15 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.344 12:22:15 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.344 12:22:15 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.344 12:22:15 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.344 12:22:15 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:10.344 12:22:15 thread -- scripts/common.sh@345 -- # : 1 00:08:10.344 12:22:15 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.344 12:22:15 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:10.344 12:22:15 thread -- scripts/common.sh@365 -- # decimal 1 00:08:10.344 12:22:15 thread -- scripts/common.sh@353 -- # local d=1 00:08:10.344 12:22:15 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.344 12:22:15 thread -- scripts/common.sh@355 -- # echo 1 00:08:10.344 12:22:15 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.344 12:22:15 thread -- scripts/common.sh@366 -- # decimal 2 00:08:10.344 12:22:15 thread -- scripts/common.sh@353 -- # local d=2 00:08:10.344 12:22:15 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.344 12:22:15 thread -- scripts/common.sh@355 -- # echo 2 00:08:10.344 12:22:15 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.344 12:22:15 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.344 12:22:15 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.344 12:22:15 thread -- scripts/common.sh@368 -- # return 0 00:08:10.344 12:22:15 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.344 12:22:15 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:10.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.344 --rc genhtml_branch_coverage=1 00:08:10.344 --rc genhtml_function_coverage=1 00:08:10.344 --rc genhtml_legend=1 00:08:10.344 --rc geninfo_all_blocks=1 00:08:10.344 --rc geninfo_unexecuted_blocks=1 00:08:10.344 00:08:10.344 ' 00:08:10.344 12:22:15 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:10.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.344 --rc genhtml_branch_coverage=1 00:08:10.344 --rc genhtml_function_coverage=1 00:08:10.344 --rc genhtml_legend=1 00:08:10.344 --rc geninfo_all_blocks=1 00:08:10.344 --rc geninfo_unexecuted_blocks=1 00:08:10.344 00:08:10.344 ' 00:08:10.344 12:22:15 thread -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:10.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.344 --rc genhtml_branch_coverage=1 00:08:10.344 --rc genhtml_function_coverage=1 00:08:10.344 --rc genhtml_legend=1 00:08:10.344 --rc geninfo_all_blocks=1 00:08:10.344 --rc geninfo_unexecuted_blocks=1 00:08:10.344 00:08:10.344 ' 00:08:10.344 12:22:15 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:10.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.344 --rc genhtml_branch_coverage=1 00:08:10.344 --rc genhtml_function_coverage=1 00:08:10.344 --rc genhtml_legend=1 00:08:10.344 --rc geninfo_all_blocks=1 00:08:10.344 --rc geninfo_unexecuted_blocks=1 00:08:10.344 00:08:10.344 ' 00:08:10.344 12:22:15 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:10.344 12:22:15 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:10.344 12:22:15 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.344 12:22:15 thread -- common/autotest_common.sh@10 -- # set +x 00:08:10.344 ************************************ 00:08:10.344 START TEST thread_poller_perf 00:08:10.344 ************************************ 00:08:10.344 12:22:15 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:10.344 [2024-11-20 12:22:15.937174] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:08:10.344 [2024-11-20 12:22:15.937282] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2699105 ] 00:08:10.344 [2024-11-20 12:22:16.013052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.344 [2024-11-20 12:22:16.077367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.344 Running 1000 pollers for 1 seconds with 1 microseconds period. 
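poller_perf registers 1000 pollers (-b 1000) with a 1-microsecond period (-l 1) and runs them for one second (-t 1); the summary that follows reports poller_cost as the busy TSC cycles divided by total_run_count, with the nanosecond figure derived from tsc_hz. The first run's numbers check out by hand (values taken from the table below; plain shell arithmetic):

  echo $(( 2713725588 / 261000 ))               # busy / total_run_count  ~= 10397 cyc
  echo $(( 10397 * 1000000000 / 2700000000 ))   # cyc * 1e9 / tsc_hz      ~= 3850 nsec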
00:08:11.718 [2024-11-20T11:22:17.484Z] ====================================== 00:08:11.718 [2024-11-20T11:22:17.484Z] busy:2713725588 (cyc) 00:08:11.718 [2024-11-20T11:22:17.484Z] total_run_count: 261000 00:08:11.718 [2024-11-20T11:22:17.484Z] tsc_hz: 2700000000 (cyc) 00:08:11.718 [2024-11-20T11:22:17.484Z] ====================================== 00:08:11.718 [2024-11-20T11:22:17.484Z] poller_cost: 10397 (cyc), 3850 (nsec) 00:08:11.718 00:08:11.718 real 0m1.233s 00:08:11.718 user 0m1.158s 00:08:11.718 sys 0m0.067s 00:08:11.718 12:22:17 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.718 12:22:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:11.718 ************************************ 00:08:11.718 END TEST thread_poller_perf 00:08:11.718 ************************************ 00:08:11.718 12:22:17 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:11.718 12:22:17 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:11.718 12:22:17 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.718 12:22:17 thread -- common/autotest_common.sh@10 -- # set +x 00:08:11.718 ************************************ 00:08:11.718 START TEST thread_poller_perf 00:08:11.718 ************************************ 00:08:11.718 12:22:17 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:11.718 [2024-11-20 12:22:17.204615] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:08:11.718 [2024-11-20 12:22:17.204699] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2699220 ] 00:08:11.718 [2024-11-20 12:22:17.275341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.718 [2024-11-20 12:22:17.339714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.718 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:08:12.654 [2024-11-20T11:22:18.420Z] ====================================== 00:08:12.654 [2024-11-20T11:22:18.420Z] busy:2702854836 (cyc) 00:08:12.654 [2024-11-20T11:22:18.420Z] total_run_count: 3657000 00:08:12.654 [2024-11-20T11:22:18.420Z] tsc_hz: 2700000000 (cyc) 00:08:12.654 [2024-11-20T11:22:18.420Z] ====================================== 00:08:12.654 [2024-11-20T11:22:18.420Z] poller_cost: 739 (cyc), 273 (nsec) 00:08:12.654 00:08:12.654 real 0m1.215s 00:08:12.654 user 0m1.146s 00:08:12.654 sys 0m0.062s 00:08:12.654 12:22:18 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.654 12:22:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:12.654 ************************************ 00:08:12.654 END TEST thread_poller_perf 00:08:12.654 ************************************ 00:08:12.914 12:22:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:12.914 00:08:12.914 real 0m2.701s 00:08:12.914 user 0m2.434s 00:08:12.914 sys 0m0.277s 00:08:12.914 12:22:18 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.914 12:22:18 thread -- common/autotest_common.sh@10 -- # set +x 00:08:12.914 ************************************ 00:08:12.914 END TEST thread 00:08:12.914 ************************************ 00:08:12.914 12:22:18 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:12.914 12:22:18 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:12.914 12:22:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:12.914 12:22:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.914 12:22:18 -- common/autotest_common.sh@10 -- # set +x 00:08:12.914 ************************************ 00:08:12.914 START TEST app_cmdline 00:08:12.914 ************************************ 00:08:12.914 12:22:18 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:12.914 * Looking for test storage... 
00:08:12.914 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:12.914 12:22:18 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:12.914 12:22:18 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:08:12.914 12:22:18 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:12.914 12:22:18 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.914 12:22:18 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:12.914 12:22:18 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.914 12:22:18 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:12.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.914 --rc genhtml_branch_coverage=1 00:08:12.914 --rc genhtml_function_coverage=1 00:08:12.914 --rc genhtml_legend=1 00:08:12.914 --rc geninfo_all_blocks=1 00:08:12.914 --rc geninfo_unexecuted_blocks=1 00:08:12.914 00:08:12.914 ' 00:08:12.914 12:22:18 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:12.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.914 --rc genhtml_branch_coverage=1 00:08:12.914 --rc genhtml_function_coverage=1 00:08:12.914 --rc genhtml_legend=1 00:08:12.914 --rc geninfo_all_blocks=1 00:08:12.914 --rc geninfo_unexecuted_blocks=1 
00:08:12.914 00:08:12.914 ' 00:08:12.914 12:22:18 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:12.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.914 --rc genhtml_branch_coverage=1 00:08:12.914 --rc genhtml_function_coverage=1 00:08:12.914 --rc genhtml_legend=1 00:08:12.914 --rc geninfo_all_blocks=1 00:08:12.914 --rc geninfo_unexecuted_blocks=1 00:08:12.914 00:08:12.914 ' 00:08:12.914 12:22:18 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:12.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.914 --rc genhtml_branch_coverage=1 00:08:12.914 --rc genhtml_function_coverage=1 00:08:12.914 --rc genhtml_legend=1 00:08:12.914 --rc geninfo_all_blocks=1 00:08:12.914 --rc geninfo_unexecuted_blocks=1 00:08:12.914 00:08:12.914 ' 00:08:12.914 12:22:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:12.914 12:22:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2699385 00:08:12.914 12:22:18 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:12.914 12:22:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2699385 00:08:12.914 12:22:18 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2699385 ']' 00:08:12.914 12:22:18 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.914 12:22:18 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.914 12:22:18 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.914 12:22:18 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.914 12:22:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:13.172 [2024-11-20 12:22:18.718061] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:08:13.172 [2024-11-20 12:22:18.718152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2699385 ] 00:08:13.172 [2024-11-20 12:22:18.789731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.172 [2024-11-20 12:22:18.852862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.446 12:22:19 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.446 12:22:19 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:13.446 12:22:19 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:13.703 { 00:08:13.703 "version": "SPDK v25.01-pre git sha1 92fb22519", 00:08:13.703 "fields": { 00:08:13.703 "major": 25, 00:08:13.703 "minor": 1, 00:08:13.703 "patch": 0, 00:08:13.703 "suffix": "-pre", 00:08:13.703 "commit": "92fb22519" 00:08:13.703 } 00:08:13.703 } 00:08:13.703 12:22:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:13.703 12:22:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:13.703 12:22:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:13.703 12:22:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:13.703 12:22:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:13.703 12:22:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:13.703 12:22:19 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.703 12:22:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:13.703 12:22:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:13.703 12:22:19 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.961 12:22:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:13.961 12:22:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:13.961 12:22:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:13.961 12:22:19 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:13.961 12:22:19 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:13.961 12:22:19 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:13.961 12:22:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.961 12:22:19 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:13.961 12:22:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.961 12:22:19 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:13.961 12:22:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.961 12:22:19 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:13.961 12:22:19 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:13.961 12:22:19 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:14.220 request: 00:08:14.220 { 00:08:14.220 "method": "env_dpdk_get_mem_stats", 00:08:14.220 "req_id": 1 00:08:14.220 } 00:08:14.220 Got JSON-RPC error response 00:08:14.220 response: 00:08:14.220 { 00:08:14.220 "code": -32601, 00:08:14.220 "message": "Method not found" 00:08:14.220 } 00:08:14.220 12:22:19 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:14.220 12:22:19 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:14.220 12:22:19 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:14.220 12:22:19 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:14.220 12:22:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2699385 00:08:14.220 12:22:19 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2699385 ']' 00:08:14.220 12:22:19 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2699385 00:08:14.220 12:22:19 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:14.220 12:22:19 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.220 12:22:19 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2699385 00:08:14.220 12:22:19 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.220 12:22:19 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.220 12:22:19 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2699385' 00:08:14.220 killing process with pid 2699385 00:08:14.220 12:22:19 app_cmdline -- common/autotest_common.sh@973 -- # kill 2699385 00:08:14.220 12:22:19 app_cmdline -- common/autotest_common.sh@978 -- # wait 2699385 00:08:14.479 00:08:14.479 real 0m1.688s 00:08:14.479 user 0m2.238s 00:08:14.479 sys 0m0.492s 00:08:14.479 12:22:20 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.479 12:22:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:14.479 ************************************ 00:08:14.479 END TEST app_cmdline 00:08:14.479 ************************************ 00:08:14.479 12:22:20 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:14.479 12:22:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.479 12:22:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.479 12:22:20 -- common/autotest_common.sh@10 -- # set +x 00:08:14.479 ************************************ 00:08:14.479 START TEST version 00:08:14.479 ************************************ 00:08:14.479 12:22:20 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:14.739 * Looking for test storage... 
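The -32601 "Method not found" response above is the point of the cmdline test: spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answer normally while anything outside the allow-list, such as env_dpdk_get_mem_stats, is rejected before dispatch. Condensed to the three calls involved (flags and method names as recorded in this log):

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py spdk_get_version          # allowed: returns the version JSON shown above
  ./scripts/rpc.py env_dpdk_get_mem_stats    # filtered: JSON-RPC error -32601, Method not found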
00:08:14.739 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:14.739 12:22:20 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:14.739 12:22:20 version -- common/autotest_common.sh@1693 -- # lcov --version 00:08:14.739 12:22:20 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:14.739 12:22:20 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:14.739 12:22:20 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.739 12:22:20 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.739 12:22:20 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.739 12:22:20 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.739 12:22:20 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.739 12:22:20 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.739 12:22:20 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.739 12:22:20 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.739 12:22:20 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.739 12:22:20 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.739 12:22:20 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.739 12:22:20 version -- scripts/common.sh@344 -- # case "$op" in 00:08:14.739 12:22:20 version -- scripts/common.sh@345 -- # : 1 00:08:14.739 12:22:20 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.739 12:22:20 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:14.739 12:22:20 version -- scripts/common.sh@365 -- # decimal 1 00:08:14.739 12:22:20 version -- scripts/common.sh@353 -- # local d=1 00:08:14.739 12:22:20 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.739 12:22:20 version -- scripts/common.sh@355 -- # echo 1 00:08:14.739 12:22:20 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.739 12:22:20 version -- scripts/common.sh@366 -- # decimal 2 00:08:14.739 12:22:20 version -- scripts/common.sh@353 -- # local d=2 00:08:14.739 12:22:20 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.739 12:22:20 version -- scripts/common.sh@355 -- # echo 2 00:08:14.739 12:22:20 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.739 12:22:20 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.739 12:22:20 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.739 12:22:20 version -- scripts/common.sh@368 -- # return 0 00:08:14.739 12:22:20 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.739 12:22:20 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:14.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.739 --rc genhtml_branch_coverage=1 00:08:14.739 --rc genhtml_function_coverage=1 00:08:14.739 --rc genhtml_legend=1 00:08:14.739 --rc geninfo_all_blocks=1 00:08:14.739 --rc geninfo_unexecuted_blocks=1 00:08:14.739 00:08:14.739 ' 00:08:14.739 12:22:20 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:14.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.739 --rc genhtml_branch_coverage=1 00:08:14.739 --rc genhtml_function_coverage=1 00:08:14.739 --rc genhtml_legend=1 00:08:14.739 --rc geninfo_all_blocks=1 00:08:14.739 --rc geninfo_unexecuted_blocks=1 00:08:14.739 00:08:14.739 ' 00:08:14.739 12:22:20 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:14.739 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.739 --rc genhtml_branch_coverage=1 00:08:14.739 --rc genhtml_function_coverage=1 00:08:14.739 --rc genhtml_legend=1 00:08:14.739 --rc geninfo_all_blocks=1 00:08:14.739 --rc geninfo_unexecuted_blocks=1 00:08:14.739 00:08:14.739 ' 00:08:14.739 12:22:20 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:14.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.739 --rc genhtml_branch_coverage=1 00:08:14.739 --rc genhtml_function_coverage=1 00:08:14.739 --rc genhtml_legend=1 00:08:14.739 --rc geninfo_all_blocks=1 00:08:14.739 --rc geninfo_unexecuted_blocks=1 00:08:14.739 00:08:14.739 ' 00:08:14.739 12:22:20 version -- app/version.sh@17 -- # get_header_version major 00:08:14.739 12:22:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:14.739 12:22:20 version -- app/version.sh@14 -- # cut -f2 00:08:14.739 12:22:20 version -- app/version.sh@14 -- # tr -d '"' 00:08:14.739 12:22:20 version -- app/version.sh@17 -- # major=25 00:08:14.739 12:22:20 version -- app/version.sh@18 -- # get_header_version minor 00:08:14.739 12:22:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:14.739 12:22:20 version -- app/version.sh@14 -- # cut -f2 00:08:14.739 12:22:20 version -- app/version.sh@14 -- # tr -d '"' 00:08:14.739 12:22:20 version -- app/version.sh@18 -- # minor=1 00:08:14.739 12:22:20 version -- app/version.sh@19 -- # get_header_version patch 00:08:14.739 12:22:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:14.739 12:22:20 version -- app/version.sh@14 -- # cut -f2 00:08:14.739 12:22:20 version -- app/version.sh@14 -- # tr -d '"' 00:08:14.739 12:22:20 version -- app/version.sh@19 -- # patch=0 00:08:14.739 12:22:20 version -- app/version.sh@20 -- # get_header_version suffix 00:08:14.739 12:22:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:14.739 12:22:20 version -- app/version.sh@14 -- # cut -f2 00:08:14.739 12:22:20 version -- app/version.sh@14 -- # tr -d '"' 00:08:14.739 12:22:20 version -- app/version.sh@20 -- # suffix=-pre 00:08:14.739 12:22:20 version -- app/version.sh@22 -- # version=25.1 00:08:14.739 12:22:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:14.739 12:22:20 version -- app/version.sh@28 -- # version=25.1rc0 00:08:14.739 12:22:20 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:14.739 12:22:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:14.739 12:22:20 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:14.739 12:22:20 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:14.739 00:08:14.739 real 0m0.237s 00:08:14.739 user 0m0.152s 00:08:14.739 sys 0m0.111s 00:08:14.740 12:22:20 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.740 12:22:20 version -- 
common/autotest_common.sh@10 -- # set +x 00:08:14.740 ************************************ 00:08:14.740 END TEST version 00:08:14.740 ************************************ 00:08:14.740 12:22:20 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:14.740 12:22:20 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:14.740 12:22:20 -- spdk/autotest.sh@194 -- # uname -s 00:08:14.740 12:22:20 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:14.740 12:22:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:14.740 12:22:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:14.740 12:22:20 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:14.740 12:22:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:14.740 12:22:20 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:14.740 12:22:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:14.740 12:22:20 -- common/autotest_common.sh@10 -- # set +x 00:08:14.740 12:22:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:14.740 12:22:20 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:08:14.740 12:22:20 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:08:14.740 12:22:20 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:08:14.740 12:22:20 -- spdk/autotest.sh@280 -- # '[' rdma = rdma ']' 00:08:14.740 12:22:20 -- spdk/autotest.sh@281 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:14.740 12:22:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:14.740 12:22:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.740 12:22:20 -- common/autotest_common.sh@10 -- # set +x 00:08:14.740 ************************************ 00:08:14.740 START TEST nvmf_rdma 00:08:14.740 ************************************ 00:08:14.740 12:22:20 nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:14.998 * Looking for test storage... 00:08:14.998 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:14.998 12:22:20 nvmf_rdma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:14.998 12:22:20 nvmf_rdma -- common/autotest_common.sh@1693 -- # lcov --version 00:08:14.998 12:22:20 nvmf_rdma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:14.998 12:22:20 nvmf_rdma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.998 12:22:20 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:08:14.998 12:22:20 nvmf_rdma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.998 12:22:20 nvmf_rdma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:14.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.998 --rc genhtml_branch_coverage=1 00:08:14.998 --rc genhtml_function_coverage=1 00:08:14.998 --rc genhtml_legend=1 00:08:14.998 --rc geninfo_all_blocks=1 00:08:14.998 --rc geninfo_unexecuted_blocks=1 00:08:14.998 00:08:14.998 ' 00:08:14.998 12:22:20 nvmf_rdma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:14.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.998 --rc genhtml_branch_coverage=1 00:08:14.998 --rc genhtml_function_coverage=1 00:08:14.998 --rc genhtml_legend=1 00:08:14.998 --rc geninfo_all_blocks=1 00:08:14.998 --rc geninfo_unexecuted_blocks=1 00:08:14.998 00:08:14.998 ' 00:08:14.998 12:22:20 nvmf_rdma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:14.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.998 --rc genhtml_branch_coverage=1 00:08:14.998 --rc genhtml_function_coverage=1 00:08:14.999 --rc genhtml_legend=1 00:08:14.999 --rc geninfo_all_blocks=1 00:08:14.999 --rc geninfo_unexecuted_blocks=1 00:08:14.999 00:08:14.999 ' 00:08:14.999 12:22:20 nvmf_rdma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:14.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.999 --rc genhtml_branch_coverage=1 00:08:14.999 --rc genhtml_function_coverage=1 00:08:14.999 --rc genhtml_legend=1 00:08:14.999 --rc geninfo_all_blocks=1 00:08:14.999 --rc geninfo_unexecuted_blocks=1 00:08:14.999 00:08:14.999 ' 00:08:14.999 12:22:20 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:08:14.999 12:22:20 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:14.999 12:22:20 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:08:14.999 12:22:20 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:14.999 12:22:20 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.999 12:22:20 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:14.999 ************************************ 00:08:14.999 START TEST nvmf_target_core 00:08:14.999 ************************************ 00:08:14.999 12:22:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:08:14.999 * Looking for test storage... 00:08:14.999 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:14.999 12:22:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:14.999 12:22:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:08:14.999 12:22:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:15.257 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:15.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.258 --rc genhtml_branch_coverage=1 00:08:15.258 --rc genhtml_function_coverage=1 00:08:15.258 --rc genhtml_legend=1 00:08:15.258 --rc geninfo_all_blocks=1 00:08:15.258 --rc geninfo_unexecuted_blocks=1 00:08:15.258 00:08:15.258 ' 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:15.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.258 --rc genhtml_branch_coverage=1 00:08:15.258 --rc genhtml_function_coverage=1 00:08:15.258 --rc genhtml_legend=1 00:08:15.258 --rc geninfo_all_blocks=1 00:08:15.258 --rc geninfo_unexecuted_blocks=1 00:08:15.258 00:08:15.258 ' 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:15.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.258 --rc genhtml_branch_coverage=1 00:08:15.258 --rc genhtml_function_coverage=1 00:08:15.258 --rc genhtml_legend=1 00:08:15.258 --rc geninfo_all_blocks=1 00:08:15.258 --rc geninfo_unexecuted_blocks=1 00:08:15.258 00:08:15.258 ' 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:15.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.258 --rc genhtml_branch_coverage=1 00:08:15.258 --rc genhtml_function_coverage=1 00:08:15.258 --rc genhtml_legend=1 00:08:15.258 --rc geninfo_all_blocks=1 00:08:15.258 --rc geninfo_unexecuted_blocks=1 00:08:15.258 00:08:15.258 ' 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.258 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.259 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.259 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:15.259 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:15.259 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:15.259 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:15.259 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:15.259 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:15.259 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:15.259 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:15.259 12:22:20 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:08:15.259 12:22:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:15.259 12:22:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.259 12:22:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:15.259 
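(A note on the "[: : integer expression expected" message recorded above, which recurs each time nvmf/common.sh is sourced: line 33 evaluates '[' '' -eq 1 ']', and the -eq operator of [ requires integers on both sides, so a flag that expands to the empty string produces this complaint instead of a quiet false. A minimal sketch of the failure and two defensive variants; FLAG is a hypothetical stand-in, since the log does not show the real variable name:)

  FLAG=""                              # unset/empty flag, as at nvmf/common.sh line 33
  [ "$FLAG" -eq 1 ] && echo on         # -> "[: : integer expression expected"
  [ "${FLAG:-0}" -eq 1 ] && echo on    # default the empty value to 0 first
  (( ${FLAG:-0} == 1 )) && echo on     # arithmetic context, same effect

(The message is noise rather than a failure: [ returns nonzero, so the script simply takes the false branch, as the nvmf/common.sh@37 trace that follows it shows.)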
************************************ 00:08:15.259 START TEST nvmf_abort 00:08:15.259 ************************************ 00:08:15.259 12:22:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:08:15.259 * Looking for test storage... 00:08:15.259 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:15.259 12:22:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:15.259 12:22:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:08:15.259 12:22:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:15.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.519 --rc genhtml_branch_coverage=1 00:08:15.519 --rc genhtml_function_coverage=1 00:08:15.519 --rc genhtml_legend=1 00:08:15.519 --rc geninfo_all_blocks=1 00:08:15.519 --rc geninfo_unexecuted_blocks=1 00:08:15.519 00:08:15.519 ' 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:15.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.519 --rc genhtml_branch_coverage=1 00:08:15.519 --rc genhtml_function_coverage=1 00:08:15.519 --rc genhtml_legend=1 00:08:15.519 --rc geninfo_all_blocks=1 00:08:15.519 --rc geninfo_unexecuted_blocks=1 00:08:15.519 00:08:15.519 ' 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:15.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.519 --rc genhtml_branch_coverage=1 00:08:15.519 --rc genhtml_function_coverage=1 00:08:15.519 --rc genhtml_legend=1 00:08:15.519 --rc geninfo_all_blocks=1 00:08:15.519 --rc geninfo_unexecuted_blocks=1 00:08:15.519 00:08:15.519 ' 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:15.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.519 --rc genhtml_branch_coverage=1 00:08:15.519 --rc genhtml_function_coverage=1 00:08:15.519 --rc genhtml_legend=1 00:08:15.519 --rc geninfo_all_blocks=1 00:08:15.519 --rc geninfo_unexecuted_blocks=1 00:08:15.519 00:08:15.519 ' 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.519 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:15.520 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:08:15.520 12:22:21 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:08:18.063 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:08:18.063 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == 
rdma ]] 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:08:18.063 Found net devices under 0000:83:00.0: mlx_0_0 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:08:18.063 Found net devices under 0000:83:00.1: mlx_0_1 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@70 -- # modprobe iw_cm 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:18.063 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:18.064 8: mlx_0_0: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:08:18.064 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:08:18.064 altname enp131s0f0np0 00:08:18.064 inet 192.168.100.8/24 scope global mlx_0_0 00:08:18.064 valid_lft forever preferred_lft forever 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:18.064 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:18.064 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:08:18.064 altname enp131s0f1np1 00:08:18.064 inet 192.168.100.9/24 scope global mlx_0_1 00:08:18.064 valid_lft forever preferred_lft forever 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:18.064 192.168.100.9' 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:18.064 192.168.100.9' 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:18.064 192.168.100.9' 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # tail -n +2 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:18.064 12:22:23 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2701119 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2701119 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2701119 ']' 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.064 12:22:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:18.064 [2024-11-20 12:22:23.638815] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:08:18.064 [2024-11-20 12:22:23.638991] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.064 [2024-11-20 12:22:23.794579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:18.324 [2024-11-20 12:22:23.904421] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.324 [2024-11-20 12:22:23.904552] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.324 [2024-11-20 12:22:23.904589] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.324 [2024-11-20 12:22:23.904618] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.324 [2024-11-20 12:22:23.904645] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
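(The nvmfappstart/waitforlisten handshake traced above reduces to roughly the sketch below; the paths are the ones used by this job, and the polling loop is a simplified approximation of what autotest_common.sh does while printing "Waiting for process to start up and listen on UNIX domain socket":)

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # shm id 0, all trace groups, core mask 0xE
  nvmfpid=$!
  until $SPDK/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1                                      # wait for /var/tmp/spdk.sock to accept RPCs
  done

(The -m 0xE core mask is binary 1110, i.e. cores 1-3, which accounts for the three "Reactor started on core ..." notices just below.)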
00:08:18.324 [2024-11-20 12:22:23.906268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.324 [2024-11-20 12:22:23.906341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.324 [2024-11-20 12:22:23.906345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.259 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.259 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:08:19.259 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:19.259 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:19.259 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:19.260 [2024-11-20 12:22:24.767663] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c91590/0x1c95a80) succeed. 00:08:19.260 [2024-11-20 12:22:24.789558] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c92b80/0x1cd7120) succeed. 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:19.260 Malloc0 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:19.260 Delay0 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:19.260 [2024-11-20 12:22:24.994384] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.260 12:22:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:19.260 12:22:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.260 12:22:25 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:19.518 [2024-11-20 12:22:25.112367] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:22.046 Initializing NVMe Controllers 00:08:22.046 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:08:22.046 controller IO queue size 128 less than required 00:08:22.046 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:22.046 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:22.046 Initialization complete. Launching workers. 
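(Collapsed into direct rpc.py invocations, the abort.sh setup traced above amounts to roughly the following; this is a sketch of what the rpc_cmd lines do, not a verbatim excerpt of abort.sh:)

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
  $RPC bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB backing bdev, 4 KiB blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

(The 1,000,000 us latencies injected by the Delay0 bdev are what give the abort example time to cancel I/O still in flight, which is why nearly every abort submitted in the statistics below succeeds.)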
00:08:22.046 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27641 00:08:22.046 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27702, failed to submit 62 00:08:22.046 success 27642, unsuccessful 60, failed 0 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:22.046 rmmod nvme_rdma 00:08:22.046 rmmod nvme_fabrics 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2701119 ']' 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2701119 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2701119 ']' 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2701119 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:08:22.046 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:22.047 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2701119 00:08:22.047 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:22.047 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:22.047 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2701119' 00:08:22.047 killing process with pid 2701119 00:08:22.047 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2701119 00:08:22.047 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2701119 00:08:22.047 12:22:27 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:22.047 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:22.047 00:08:22.047 real 0m6.798s 00:08:22.047 user 0m14.393s 00:08:22.047 sys 0m2.451s 00:08:22.047 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.047 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:22.047 ************************************ 00:08:22.047 END TEST nvmf_abort 00:08:22.047 ************************************ 00:08:22.047 12:22:27 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:08:22.047 12:22:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:22.047 12:22:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.047 12:22:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:22.047 ************************************ 00:08:22.047 START TEST nvmf_ns_hotplug_stress 00:08:22.047 ************************************ 00:08:22.047 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:08:22.047 * Looking for test storage... 00:08:22.309 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
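(The lt 1.15 2 probe opening this test ran identically before nvmf_target_core and nvmf_abort above. Its cmp_versions logic boils down to the sketch below; the real scripts/common.sh additionally routes each field through its decimal helper to normalize non-numeric components, which is omitted here:)

  lt() {   # usage: lt VER1 VER2 -> exit 0 iff VER1 < VER2
      local -a ver1 ver2
      local v
      IFS=.-: read -ra ver1 <<< "$1"   # same .-: field splitting as the trace
      IFS=.-: read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not less-than
  }
  lt 1.15 2 && echo "1.15 < 2"   # matches the return 0 traced here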
00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:22.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.309 --rc genhtml_branch_coverage=1 00:08:22.309 --rc genhtml_function_coverage=1 00:08:22.309 --rc genhtml_legend=1 00:08:22.309 --rc geninfo_all_blocks=1 00:08:22.309 --rc geninfo_unexecuted_blocks=1 00:08:22.309 00:08:22.309 ' 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:22.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.309 --rc genhtml_branch_coverage=1 00:08:22.309 --rc genhtml_function_coverage=1 00:08:22.309 --rc genhtml_legend=1 00:08:22.309 --rc geninfo_all_blocks=1 00:08:22.309 --rc geninfo_unexecuted_blocks=1 00:08:22.309 00:08:22.309 ' 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:22.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.309 --rc genhtml_branch_coverage=1 00:08:22.309 --rc genhtml_function_coverage=1 00:08:22.309 --rc genhtml_legend=1 00:08:22.309 --rc geninfo_all_blocks=1 00:08:22.309 --rc geninfo_unexecuted_blocks=1 00:08:22.309 00:08:22.309 ' 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:22.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:22.309 --rc genhtml_branch_coverage=1 00:08:22.309 --rc genhtml_function_coverage=1 00:08:22.309 --rc genhtml_legend=1 00:08:22.309 --rc geninfo_all_blocks=1 00:08:22.309 --rc geninfo_unexecuted_blocks=1 00:08:22.309 00:08:22.309 ' 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.309 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.310 12:22:27 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:22.310 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:08:22.310 12:22:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.850 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:24.850 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:08:24.850 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:24.850 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:24.850 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:24.850 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:24.850 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:24.850 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:08:24.850 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:24.850 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:08:24.850 12:22:30 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:08:24.850 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:08:24.851 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:24.851 12:22:30 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:08:24.851 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:08:24.851 Found net devices under 0000:83:00.0: mlx_0_0 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
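Note on the "[: : integer expression expected" message a few lines above: common.sh line 33 ended up running '[' '' -eq 1 ']' because the variable it tests expanded to an empty string, and test(1) requires integers on both sides of -eq. The script tolerates the failure and continues, but the usual guard is to default the expansion. A short sketch (SOME_FLAG is a hypothetical stand-in for the actual variable at common.sh:33):

    #!/usr/bin/env bash
    # [ "$SOME_FLAG" -eq 1 ] fails with "integer expression expected" when empty/unset;
    # ${SOME_FLAG:-0} substitutes 0 so the numeric test is always well-formed.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo 'flag enabled'
    fi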
00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:08:24.851 Found net devices under 0000:83:00.1: mlx_0_1 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:24.851 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:24.851 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:24.851 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:08:24.852 altname enp131s0f0np0 00:08:24.852 inet 192.168.100.8/24 scope global mlx_0_0 00:08:24.852 valid_lft forever preferred_lft forever 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:24.852 12:22:30 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:24.852 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:24.852 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:08:24.852 altname enp131s0f1np1 00:08:24.852 inet 192.168.100.9/24 scope global mlx_0_1 00:08:24.852 valid_lft forever preferred_lft forever 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:24.852 12:22:30 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:24.852 192.168.100.9' 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:24.852 192.168.100.9' 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:24.852 192.168.100.9' 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2702823 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2702823 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2702823 ']' 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.852 [2024-11-20 12:22:30.330774] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:08:24.852 [2024-11-20 12:22:30.330881] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.852 [2024-11-20 12:22:30.403204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:24.852 [2024-11-20 12:22:30.466659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.852 [2024-11-20 12:22:30.466719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.852 [2024-11-20 12:22:30.466735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.852 [2024-11-20 12:22:30.466748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.852 [2024-11-20 12:22:30.466760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
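Note: waitforlisten above is what keeps the test from racing the target; it polls until the freshly started nvmf_tgt (pid 2702823) answers RPCs on /var/tmp/spdk.sock. A minimal sketch of that polling pattern, assuming rpc.py's -s (socket path) and -t (timeout) options and the built-in rpc_get_methods RPC; this illustrates the idea and is not SPDK's exact helper:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    pid=2702823

    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || { echo 'target died during startup' >&2; exit 1; }
        # succeeds only once the app has created the socket and is serving RPCs
        if "$rpc" -t 1 -s "$sock" rpc_get_methods &> /dev/null; then
            echo "nvmf_tgt is listening on $sock"
            break
        fi
        sleep 0.5
    done

Only after this returns does the script issue the nvmf_create_transport / nvmf_create_subsystem / bdev_null_create sequence seen below.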
00:08:24.852 [2024-11-20 12:22:30.467997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.852 [2024-11-20 12:22:30.468082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.852 [2024-11-20 12:22:30.468104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:24.852 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.110 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.110 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:25.110 12:22:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:25.369 [2024-11-20 12:22:30.977784] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9e5590/0x9e9a80) succeed. 00:08:25.369 [2024-11-20 12:22:30.992315] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9e6b80/0xa2b120) succeed. 00:08:25.627 12:22:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:25.894 12:22:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:26.152 [2024-11-20 12:22:31.798980] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:26.152 12:22:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:26.411 12:22:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:26.978 Malloc0 00:08:26.978 12:22:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:27.237 Delay0 00:08:27.237 12:22:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.495 12:22:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:08:27.753 NULL1 00:08:27.753 12:22:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:28.320 12:22:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2703138 00:08:28.320 12:22:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:28.320 12:22:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138 00:08:28.320 12:22:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.255 Read completed with error (sct=0, sc=11) 00:08:29.513 12:22:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.771 12:22:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:29.771 12:22:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:30.029 true 00:08:30.029 12:22:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138 00:08:30.029 12:22:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.853 12:22:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.853 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:08:30.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.112 12:22:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:31.112 12:22:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:31.370 true 00:08:31.370 12:22:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138 00:08:31.370 12:22:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.305 12:22:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.305 12:22:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:32.305 12:22:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:32.873 true 00:08:32.873 12:22:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138 00:08:32.873 12:22:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.440 12:22:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.697 12:22:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:33.697 12:22:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:33.954 true 00:08:34.213 12:22:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138 00:08:34.213 12:22:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.471 12:22:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.729 12:22:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:34.729 12:22:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:34.989 true 00:08:34.989 12:22:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138 00:08:34.989 12:22:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.611 12:22:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.869 12:22:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:35.869 12:22:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:36.128 true 00:08:36.128 12:22:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138 00:08:36.128 12:22:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.386 12:22:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.643 12:22:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:36.643 12:22:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:37.210 true 00:08:37.210 12:22:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138 00:08:37.210 12:22:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.469 12:22:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.727 12:22:43 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:37.727 12:22:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:37.985 true 00:08:37.985 12:22:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138 00:08:37.985 12:22:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.551 12:22:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.809 12:22:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:38.809 12:22:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:39.068 true 00:08:39.068 12:22:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138 00:08:39.068 12:22:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.326 12:22:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.893 12:22:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:39.893 12:22:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:40.151 true 00:08:40.151 12:22:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138 00:08:40.151 12:22:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.410 12:22:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.668 12:22:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:40.668 12:22:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:41.235 true 00:08:41.235 12:22:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138 00:08:41.235 12:22:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.493 12:22:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.751 12:22:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:41.751 12:22:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:42.010 true 00:08:42.010 12:22:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138 00:08:42.010 12:22:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.578 12:22:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.836 12:22:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:42.836 12:22:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:43.094 true 00:08:43.094 12:22:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138 00:08:43.094 12:22:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.353 12:22:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.917 12:22:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:43.917 12:22:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:44.175 true 00:08:44.175 12:22:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138 00:08:44.175 12:22:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.433 12:22:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.690 12:22:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:44.690 12:22:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:45.256 true 00:08:45.256 12:22:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138 00:08:45.256 12:22:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
00:08:45.514 12:22:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:45.773 12:22:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:08:45.773 12:22:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:08:46.031 true
00:08:46.031 12:22:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138
00:08:46.031 12:22:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:46.597 12:22:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:46.855 12:22:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:08:46.855 12:22:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:08:47.113 true
00:08:47.113 12:22:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138
00:08:47.113 12:22:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:47.371 12:22:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:47.937 12:22:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:08:47.937 12:22:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:08:48.196 true
00:08:48.196 12:22:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138
00:08:48.196 12:22:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:48.454 12:22:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:48.713 12:22:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:08:48.713 12:22:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:08:49.279 true
00:08:49.279 12:22:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138
00:08:49.279 12:22:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:49.538 12:22:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:49.797 12:22:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:08:49.797 12:22:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:08:50.055 true
00:08:50.055 12:22:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138
00:08:50.055 12:22:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:50.314 12:22:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:50.880 12:22:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:08:50.881 12:22:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:08:51.139 true
00:08:51.139 12:22:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138
00:08:51.139 12:22:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:51.397 12:22:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:51.963 12:22:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:08:51.963 12:22:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:08:52.221 true
00:08:52.221 12:22:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138
00:08:52.221 12:22:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:52.479 12:22:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:52.737 12:22:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:08:52.737 12:22:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:08:52.995 true
00:08:52.995 12:22:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138
00:08:52.995 12:22:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:53.561 12:22:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:53.818 12:22:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:08:53.818 12:22:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:08:54.076 true
00:08:54.076 12:22:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138
00:08:54.076 12:22:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:54.333 12:23:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:54.899 12:23:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:08:54.899 12:23:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:08:55.156 true
00:08:55.156 12:23:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138
00:08:55.156 12:23:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:55.413 12:23:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:55.670 12:23:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:08:55.670 12:23:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:08:56.235 true
00:08:56.235 12:23:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138
00:08:56.235 12:23:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:56.492 12:23:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:56.750 12:23:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:08:56.750 12:23:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:08:57.006 true
00:08:57.006 12:23:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138
00:08:57.006 12:23:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:57.571 12:23:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:57.829 12:23:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:08:57.829 12:23:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:08:58.087 true
00:08:58.087 12:23:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138
00:08:58.087 12:23:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:58.347 12:23:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:58.913 12:23:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:08:58.913 12:23:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:08:59.171 true
00:08:59.171 12:23:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138
00:08:59.171 12:23:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:59.429 12:23:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:59.687 12:23:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:08:59.687 12:23:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:09:00.253 true
00:09:00.253 12:23:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138
00:09:00.253 12:23:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:00.511 12:23:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:00.511 Initializing NVMe Controllers
00:09:00.511 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:09:00.511 Controller IO queue size 128, less than required.
00:09:00.511 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:00.511 Controller IO queue size 128, less than required.
00:09:00.511 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:00.511 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:00.511 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:00.511 Initialization complete. Launching workers.
00:09:00.511 ========================================================
00:09:00.511 Latency(us)
00:09:00.511 Device Information : IOPS MiB/s Average min max
00:09:00.511 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1138.92 0.56 19356.16 1639.80 1009528.07
00:09:00.511 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4855.82 2.37 26360.69 2767.51 318757.55
00:09:00.511 ========================================================
00:09:00.511 Total : 5994.74 2.93 25029.92 1639.80 1009528.07
00:09:00.511
00:09:00.769 12:23:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:09:00.769 12:23:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:09:01.027 true
00:09:01.027 12:23:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2703138
00:09:01.027 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2703138) - No such process
00:09:01.027 12:23:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2703138
00:09:01.027 12:23:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:01.593 12:23:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:01.851 12:23:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:01.851 12:23:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:01.851 12:23:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:01.851 12:23:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:01.851 12:23:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:09:02.109 null0
00:09:02.109 12:23:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
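The perf-style summary embedded above is the final report of the background I/O job that the resize loop was racing against. A quick consistency check of its Total row: 1138.92 + 4855.82 = 5994.74 IOPS and 0.56 + 2.37 = 2.93 MiB/s, matching the per-namespace rows; the average latency is the IOPS-weighted mean, (1138.92 x 19356.16 + 4855.82 x 26360.69) / 5994.74 ≈ 25029.92 us, and min/max are taken across both namespaces. NSID 1, the namespace being repeatedly detached and re-attached, shows the ~1.0 s worst case, which together with the "queue size 128, less than required" warnings is consistent with I/O sitting queued at the driver while the namespace was absent. Once the job exited, kill -0 failed ("No such process"), the loop ended, and the script tore down both namespaces (@54/@55) before starting the null bdev creation now in progress.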
00:09:02.109 12:23:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:02.109 12:23:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:09:02.367 null1
00:09:02.367 12:23:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:02.367 12:23:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:02.367 12:23:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:09:02.932 null2
00:09:02.932 12:23:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:02.932 12:23:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:02.932 12:23:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:09:03.189 null3
00:09:03.190 12:23:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:03.190 12:23:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:03.190 12:23:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:09:03.447 null4
00:09:03.447 12:23:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:03.447 12:23:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:03.447 12:23:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:09:03.705 null5
00:09:03.705 12:23:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:03.705 12:23:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:03.705 12:23:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:09:04.300 null6
00:09:04.300 12:23:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:04.300 12:23:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:04.300 12:23:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:09:04.580 null7
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
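All eight backing devices are now in place: the @58-@60 tags show nthreads=8 and a loop creating null0 through null7, each a 100 MB null bdev with a 4096-byte block size. A minimal sketch of that setup, continuing the assumed rpc shorthand from the earlier sketch:

    nthreads=8                                  # @58: one worker per namespace
    pids=()                                     # @58: worker PIDs are collected later at @64
    for ((i = 0; i < nthreads; i++)); do        # @59
        $rpc bdev_null_create null$i 100 4096   # @60: name, size in MB, block size in bytes
    done

A null bdev discards writes and returns zeroes on reads, so it gives the hotplug path a real block device to attach and detach without any storage cost.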
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
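The @62-@64 tags above are the launcher: each add_remove worker is forked into the background with its namespace ID and backing bdev, and its PID is appended to the pids array; the @14-@18 tags trace the workers themselves, each hot-adding and hot-removing its own namespace ten times. A sketch reconstructed from those tags (same assumed shorthand as above; the real script's internals may differ):

    add_remove() {                                                                # traced at @14-@18
        local nsid=$1 bdev=$2                                                     # @14
        for ((i = 0; i < 10; i++)); do                                            # @16
            $rpc nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev  # @17
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid        # @18
        done
    }
    for ((i = 0; i < nthreads; i++)); do    # @62
        add_remove $((i + 1)) null$i &      # @63: e.g. add_remove 8 null7, run in the background
        pids+=($!)                          # @64: remember the worker PID
    done
    wait ${pids[@]}                         # @66: wait 2706403 2706404 ... 2706416

Because the eight workers run concurrently, their @16-@18 traces below interleave in nondeterministic order; the namespace IDs in each burst of remove lines are simply whichever worker's RPC completed first.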
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2706403 2706404 2706405 2706408 2706410 2706412 2706414 2706416
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:04.580 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:04.838 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:04.838 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:04.838 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:04.838 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:04.838 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:04.839 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:04.839 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:04.839 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:05.097 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:05.097 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:05.097 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:05.097 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:05.097 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:05.097 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:05.097 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:05.097 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:05.097 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:05.097 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:05.097 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:05.098 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:05.098 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:05.098 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:05.098 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:05.098 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:05.098 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:05.098 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:05.098 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:05.098 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:05.098 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:05.098 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:05.098 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:05.098 12:23:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:05.356 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:05.356 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:05.356 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:05.356 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:05.356 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:05.356 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:05.356 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:05.356 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:05.925 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:06.183 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:06.183 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:06.183 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:06.183 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:06.183 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:06.183 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:06.183 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:06.183 12:23:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:06.475 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:06.734 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:06.734 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:06.734 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:06.734 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:06.734 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:06.734 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:06.734 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:06.734 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:06.993 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:06.993 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:06.993 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:06.993 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:06.993 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:06.993 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:06.993 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:06.993 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:06.993 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:06.993 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:06.993 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:06.993 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:06.993 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:06.993 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:06.993 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:07.252 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:07.252 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:07.252 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:07.252 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:07.252 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:07.252 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:07.252 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:07.252 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:07.252 12:23:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:07.512 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:07.512 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:07.512 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:07.512 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:07.512 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:07.512 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:07.512 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:07.512 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:07.771 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:08.031 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:08.031 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:08.031 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:08.031 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:08.031 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:08.031 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:08.031 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:08.031 12:23:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:08.598 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:08.599 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:08.857 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:08.857 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:08.857 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:08.857 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:08.857 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.857 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:08.857 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:08.857 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.115 12:23:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:09.373 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:09.373 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:09.373 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:09.373 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:09.373 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:09.373 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:09.373 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.373 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.941 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.942 12:23:15 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:10.200 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:10.200 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:10.200 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:10.200 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:10.200 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:10.200 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:10.200 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:10.200 12:23:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.458 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.458 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.458 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:10.458 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.458 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.458 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:10.458 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.458 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.458 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:10.458 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:09:10.458 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.458 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:10.458 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.458 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.458 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:10.458 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.458 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.458 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:10.459 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.459 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.459 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:10.459 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.459 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.459 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:10.717 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:10.717 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:10.717 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:10.717 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:10.717 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:10.717 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:10.717 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:10.717 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
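At this point (( i < 10 )) finally fails, the EXIT trap is cleared at @68, and nvmftestfini/nvmfcleanup begin tearing the target down. The nvmf/common.sh@121-@129 records spanning this break have roughly this shape (the sleep between retries is an assumption; the log only shows the {1..20} bound and the set +e / set -e bracket):

    sync                                  # @121: flush outstanding I/O first
    set +e                                # @124: module unload may fail while refs remain
    for i in {1..20}; do                  # @125: bounded retry
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1                           # assumed back-off, not visible in the log
    done
    set -e                                # @128: restore errexit, then return 0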
nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:09:11.284 rmmod nvme_rdma
00:09:11.284 rmmod nvme_fabrics
00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2702823 ']'
00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2702823
00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2702823 ']'
00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2702823
00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2702823
00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2702823'
00:09:11.284 killing process with pid 2702823
00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2702823
00:09:11.284 12:23:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2702823
00:09:11.543 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:11.543 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:09:11.543
00:09:11.543 real 0m49.483s
00:09:11.543 user 4m9.364s
00:09:11.543 sys 0m12.513s
00:09:11.543 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:11.543 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:09:11.543 ************************************
00:09:11.543 END TEST nvmf_ns_hotplug_stress
00:09:11.543 ************************************
00:09:11.543 12:23:17 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma
00:09:11.543 12:23:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:11.543 12:23:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:11.543 12:23:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:11.543 ************************************
00:09:11.543 START TEST nvmf_delete_subsystem
00:09:11.543 ************************************
00:09:11.543 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- #
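killprocess, as traced in the common/autotest_common.sh@954-@978 records above, checks what it is about to signal before killing target pid 2702823. A simplified paraphrase of that flow (not the helper's exact body):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                        # @954: refuse an empty pid
        kill -0 "$pid" 2> /dev/null || return 0          # @958: nothing to do if already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # @960: what is actually running
        if [[ $process_name == sudo ]]; then             # @964: signal the child under sudo
            sudo kill "$pid"
        else
            echo "killing process with pid $pid"         # @972
            kill "$pid"                                  # @973
        fi
        wait "$pid" || true                              # @978: reap it, ignore exit status
    }

Here the process name is reactor_1, an SPDK reactor thread, so the plain kill branch is taken.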
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:09:11.802 * Looking for test storage... 00:09:11.802 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:11.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.802 --rc genhtml_branch_coverage=1 00:09:11.802 --rc genhtml_function_coverage=1 00:09:11.802 --rc genhtml_legend=1 00:09:11.802 --rc geninfo_all_blocks=1 00:09:11.802 --rc geninfo_unexecuted_blocks=1 00:09:11.802 00:09:11.802 ' 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:11.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.802 --rc genhtml_branch_coverage=1 00:09:11.802 --rc genhtml_function_coverage=1 00:09:11.802 --rc genhtml_legend=1 00:09:11.802 --rc geninfo_all_blocks=1 00:09:11.802 --rc geninfo_unexecuted_blocks=1 00:09:11.802 00:09:11.802 ' 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:11.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.802 --rc genhtml_branch_coverage=1 00:09:11.802 --rc genhtml_function_coverage=1 00:09:11.802 --rc genhtml_legend=1 00:09:11.802 --rc geninfo_all_blocks=1 00:09:11.802 --rc geninfo_unexecuted_blocks=1 00:09:11.802 00:09:11.802 ' 00:09:11.802 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:11.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.802 --rc genhtml_branch_coverage=1 00:09:11.802 --rc genhtml_function_coverage=1 00:09:11.803 --rc genhtml_legend=1 00:09:11.803 --rc geninfo_all_blocks=1 00:09:11.803 --rc geninfo_unexecuted_blocks=1 00:09:11.803 00:09:11.803 ' 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
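The scripts/common.sh@333-@368 trace above is a dotted-version comparison: lcov's reported version is split on '.', '-' and ':' and compared field by field against the minimum, which is how the `lt 1.15 2` call resolves to true. Reconstructed and simplified from those records (the decimal() validation steps at @353-@355 are folded into the :-0 defaults):

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"       # @336
        IFS=.-: read -ra ver2 <<< "$3"       # @337
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == *'>'* ]]; return; }  # @367
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == *'<'* ]]; return; }  # @368
        done
        [[ $op == *'='* ]]                   # all fields equal: true only for ==, <=, >=
    }
    lt() { cmp_versions "$1" '<' "$2"; }     # lt 1.15 2: 1 < 2 in the first field, so true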
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
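The NVME_HOSTNQN/NVME_HOSTID pair set at @17/@18 above comes straight from nvme-cli: `nvme gen-hostnqn` prints an nqn.2014-08.org.nvmexpress:uuid:<uuid> string, and the host ID is that trailing uuid. As a paraphrase of the lines being traced:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:f19ece52-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the uuid after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")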
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
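Worth noticing in the PATH value above: every re-source of paths/export.sh prepends the same golangci/protoc/go trio again, so the list holds several copies of each directory by the time @6 echoes it. Harmless, but a hypothetical idempotent prepend (not part of the tree) would keep it flat:

    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                 # already present, leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/golangci/1.54.2/bin
    export PATH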
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:11.803 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:11.803 12:23:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:09:14.338 12:23:19 
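The "[: : integer expression expected" complaint above is the one real error in this stretch: nvmf/common.sh line 33 evaluates `[ '' -eq 1 ]` because some SPDK_TEST_* flag is unset in this configuration, so `[` rejects the empty operand and the test is simply false; the run continues unharmed (the @37 check follows normally). A defensive form of that check (flag and option names here are placeholders, the log does not show which variable was empty):

    # expand to 0 when the flag is unset/empty instead of handing '' to [
    if [[ ${SPDK_TEST_SOME_FLAG:-0} -eq 1 ]]; then
        NVMF_APP+=(--some-option)      # hypothetical, mirrors the NVMF_APP+= pattern above
    fi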
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:09:14.338 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:14.338 
12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:09:14.338 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:09:14.338 Found net devices under 0000:83:00.0: mlx_0_0 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:09:14.338 Found net devices under 0000:83:00.1: mlx_0_1 00:09:14.338 12:23:19 
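Both ports report vendor 0x15b3, device 0x1015 (a Mellanox ConnectX-4 Lx), so the mlx5 branch is taken and, rdma being the transport, NVME_CONNECT grows the extra `-i 15` flag seen at @388. The sysfs walk behind the "Found ..." lines above amounts to roughly this (plain sysfs, simplified from the pci_devs loop):

    for pci in /sys/bus/pci/devices/*; do
        vendor=$(< "$pci/vendor") device=$(< "$pci/device")
        if [[ $vendor == 0x15b3 && $device == 0x1015 ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"
            ls "$pci/net" 2> /dev/null       # the netdevs under it: mlx_0_0 / mlx_0_1
        fi
    done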
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:14.338 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # 
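rdma_device_init pulls in the whole InfiniBand/RDMA-CM kernel stack before any addressing happens. Spelled out, the @66-@72 modprobe records above are simply:

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"          # one module per @66-@72 record
    done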
continue 2 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:14.339 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:14.339 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:09:14.339 altname enp131s0f0np0 00:09:14.339 inet 192.168.100.8/24 scope global mlx_0_0 00:09:14.339 valid_lft forever preferred_lft forever 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:14.339 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:14.339 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:09:14.339 altname enp131s0f1np1 00:09:14.339 inet 192.168.100.9/24 scope global 
mlx_0_1 00:09:14.339 valid_lft forever preferred_lft forever 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:14.339 
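Stripped of the xtrace noise, the get_ip_address step traced above is one pipeline per interface; a minimal standalone sketch of it, using the mlx_0_* interface names from this run (they are an assumption on any other host):

#!/usr/bin/env bash
# Sketch of the traced address-discovery step: print the first IPv4 address
# of each RDMA-capable netdev. Interface names below come from this run.
for iface in mlx_0_0 mlx_0_1; do
    # "ip -o -4" prints one record per line; field 4 is ADDRESS/PREFIX,
    # so awk picks it out and cut drops the prefix length.
    addr=$(ip -o -4 addr show "$iface" | awk '{print $4}' | cut -d/ -f1)
    echo "${iface}: ${addr:-no IPv4 address assigned}"
done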
12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:14.339 192.168.100.9' 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:14.339 192.168.100.9' 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:14.339 192.168.100.9' 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:14.339 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:14.340 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.340 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2708535 00:09:14.340 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2708535 00:09:14.340 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:14.340 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2708535 ']' 00:09:14.340 12:23:19 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.340 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.340 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.340 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.340 12:23:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.340 [2024-11-20 12:23:19.919255] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:09:14.340 [2024-11-20 12:23:19.919370] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.340 [2024-11-20 12:23:19.995054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:14.340 [2024-11-20 12:23:20.059458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.340 [2024-11-20 12:23:20.059527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.340 [2024-11-20 12:23:20.059552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.340 [2024-11-20 12:23:20.059566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.340 [2024-11-20 12:23:20.059584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.340 [2024-11-20 12:23:20.063536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.340 [2024-11-20 12:23:20.063550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.597 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.597 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:09:14.597 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:14.598 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:14.598 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.598 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.598 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:14.598 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.598 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.598 [2024-11-20 12:23:20.306638] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16fa560/0x16fea50) succeed. 
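Condensed, the target bring-up traced above is two steps: start nvmf_tgt, then create the RDMA transport. A sketch under the assumption that it runs from the root of an SPDK build tree with the default RPC socket at /var/tmp/spdk.sock:

#!/usr/bin/env bash
# Start the target on cores 0-1 (-m 0x3) with all tracepoint groups enabled
# (-e 0xFFFF), the exact flags traced above, then create the RDMA transport
# with the same shared-buffer and IO-unit settings.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# Crude stand-in for the harness's waitforlisten helper: give the app a
# moment to bring up its RPC socket before issuing commands.
sleep 2
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192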
00:09:14.598 [2024-11-20 12:23:20.320541] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16fbab0/0x17400f0) succeed. 00:09:14.855 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.855 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:14.855 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.855 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.855 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.855 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:14.855 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.855 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.855 [2024-11-20 12:23:20.427540] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:14.855 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.855 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:14.855 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.855 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.855 NULL1 00:09:14.855 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.855 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:14.855 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.855 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.855 Delay0 00:09:14.855 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.855 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.856 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.856 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.856 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.856 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2708644 00:09:14.856 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 
128 -w randrw -M 70 -o 512 -P 4 00:09:14.856 12:23:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:14.856 [2024-11-20 12:23:20.550892] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:16.775 12:23:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:16.775 12:23:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.775 12:23:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:18.172 NVMe io qpair process completion error 00:09:18.172 NVMe io qpair process completion error 00:09:18.172 NVMe io qpair process completion error 00:09:18.172 NVMe io qpair process completion error 00:09:18.172 NVMe io qpair process completion error 00:09:18.172 NVMe io qpair process completion error 00:09:18.172 12:23:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.172 12:23:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:18.172 12:23:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2708644 00:09:18.172 12:23:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:18.429 12:23:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:18.429 12:23:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2708644 00:09:18.429 12:23:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:18.995 Read completed with error (sct=0, sc=8) 00:09:18.995 starting I/O failed: -6 00:09:18.995 Read completed with error (sct=0, sc=8) 00:09:18.995 starting I/O failed: -6 00:09:18.995 Read completed with error (sct=0, sc=8) 00:09:18.995 starting I/O failed: -6 00:09:18.995 Read completed with error (sct=0, sc=8) 00:09:18.995 starting I/O failed: -6 00:09:18.995 Write completed with error (sct=0, sc=8) 00:09:18.995 starting I/O failed: -6 00:09:18.995 Read completed with error (sct=0, sc=8) 00:09:18.995 starting I/O failed: -6 00:09:18.995 Write completed with error (sct=0, sc=8) 00:09:18.995 starting I/O failed: -6 00:09:18.995 Read completed with error (sct=0, sc=8) 00:09:18.995 starting I/O failed: -6 00:09:18.995 Read completed with error (sct=0, sc=8) 00:09:18.995 starting I/O failed: -6 00:09:18.995 Write completed with error (sct=0, sc=8) 00:09:18.995 starting I/O failed: -6 00:09:18.995 Read completed with error (sct=0, sc=8) 00:09:18.995 starting I/O failed: -6 00:09:18.995 Read completed with error (sct=0, sc=8) 00:09:18.995 starting I/O failed: -6 00:09:18.995 Read completed with error (sct=0, sc=8) 00:09:18.995 starting I/O failed: -6 00:09:18.995 Read completed with error (sct=0, sc=8) 00:09:18.995 starting I/O failed: -6 00:09:18.995 Read completed with error (sct=0, sc=8) 00:09:18.995 starting I/O failed: -6 00:09:18.995 Read completed with error (sct=0, sc=8) 00:09:18.995 starting I/O failed: -6 00:09:18.995 Write completed with error (sct=0, sc=8) 00:09:18.995 
starting I/O failed: -6 00:09:18.995 Read completed with error (sct=0, sc=8) 00:09:18.995 starting I/O failed: -6 00:09:18.995 Write completed with error (sct=0, sc=8) 00:09:18.996 Read completed with error (sct=0, sc=8) 00:09:18.996 Write completed with error (sct=0, sc=8) 00:09:18.996 Read completed with error (sct=0, sc=8) 00:09:18.997 12:23:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:18.997 12:23:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2708644 00:09:18.997 12:23:24 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:18.997 Initializing NVMe Controllers 00:09:18.997 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:18.997 Controller IO queue size 128, less than required. 00:09:18.997 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:18.997 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:18.997 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:18.997 Initialization complete. Launching workers.
00:09:18.997 ========================================================
00:09:18.997 Latency(us)
00:09:18.997 Device Information : IOPS MiB/s Average min max
00:09:18.997 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.54 0.04 1594202.89 1000105.67 2977105.61
00:09:18.997 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.54 0.04 1593261.25 1001074.05 2969356.33
00:09:18.997 ========================================================
00:09:18.997 Total : 161.08 0.08 1593732.07 1000105.67 2977105.61
00:09:18.997
00:09:18.997 [2024-11-20 12:23:24.650426] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:09:18.997 [2024-11-20 12:23:24.671522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:09:18.997 [2024-11-20 12:23:24.671561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:09:18.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2708644 00:09:19.562 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2708644) - No such process 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2708644 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2708644 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2708644 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:19.562 [2024-11-20 12:23:25.170929] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2709037 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2709037 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:19.562 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:19.562 [2024-11-20 12:23:25.286632] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
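Condensed, the second pass traced above re-creates the subsystem, re-attaches the listener and namespace, starts perf in the background, and polls until it exits. A sketch with the values from this run (the rpc.py and binary paths are assumptions for other checkouts):

#!/usr/bin/env bash
# Recreate the subsystem and expose the delay bdev again, mirroring the
# traced RPCs, then drive a 3-second 70/30 random workload at queue depth 128.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
# Poll the way the harness does: fail if perf is still alive after roughly
# 10 seconds (20 iterations x 0.5 s, matching the traced delay counter).
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && { echo "spdk_nvme_perf did not exit in time" >&2; exit 1; }
    sleep 0.5
done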
00:09:20.127 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:20.127 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2709037 00:09:20.127 12:23:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:20.692 12:23:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:20.692 12:23:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2709037 00:09:20.692 12:23:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:20.950 12:23:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:20.950 12:23:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2709037 00:09:20.950 12:23:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:21.516 12:23:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:21.516 12:23:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2709037 00:09:21.516 12:23:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:22.081 12:23:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:22.081 12:23:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2709037 00:09:22.081 12:23:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:22.648 12:23:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:22.648 12:23:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2709037 00:09:22.648 12:23:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:23.214 12:23:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:23.214 12:23:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2709037 00:09:23.214 12:23:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:23.472 12:23:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:23.472 12:23:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2709037 00:09:23.472 12:23:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:24.038 12:23:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:24.038 12:23:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2709037 00:09:24.038 12:23:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:24.605 12:23:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:24.605 12:23:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@57 -- # kill -0 2709037 00:09:24.605 12:23:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:25.170 12:23:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:25.170 12:23:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2709037 00:09:25.170 12:23:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:25.736 12:23:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:25.736 12:23:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2709037 00:09:25.736 12:23:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:25.994 12:23:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:25.994 12:23:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2709037 00:09:25.994 12:23:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:26.560 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:26.560 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2709037 00:09:26.560 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:26.819 Initializing NVMe Controllers 00:09:26.819 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:26.819 Controller IO queue size 128, less than required. 00:09:26.819 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:26.819 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:26.819 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:26.819 Initialization complete. Launching workers. 
00:09:26.819 ========================================================
00:09:26.819 Latency(us)
00:09:26.819 Device Information : IOPS MiB/s Average min max
00:09:26.819 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003389.83 1000080.24 1007144.22
00:09:26.819 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1001719.52 1000072.62 1005503.86
00:09:26.819 ========================================================
00:09:26.819 Total : 256.00 0.12 1002554.68 1000072.62 1007144.22
00:09:26.819
00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2709037 00:09:27.078 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2709037) - No such process 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2709037 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:27.078 rmmod nvme_rdma 00:09:27.078 rmmod nvme_fabrics 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2708535 ']' 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2708535 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2708535 ']' 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2708535 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2708535 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2708535' 00:09:27.078 killing process with pid 2708535 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2708535 00:09:27.078 12:23:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2708535 00:09:27.337 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:27.337 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:27.337 00:09:27.337 real 0m15.826s 00:09:27.337 user 0m48.230s 00:09:27.337 sys 0m2.896s 00:09:27.337 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.337 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:27.337 ************************************ 00:09:27.337 END TEST nvmf_delete_subsystem 00:09:27.337 ************************************ 00:09:27.597 12:23:33 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:09:27.597 12:23:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:27.597 12:23:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.597 12:23:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:27.597 ************************************ 00:09:27.597 START TEST nvmf_host_management 00:09:27.597 ************************************ 00:09:27.597 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:09:27.597 * Looking for test storage... 
00:09:27.597 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:27.597 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:27.597 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:27.597 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:27.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.856 --rc genhtml_branch_coverage=1 00:09:27.856 --rc genhtml_function_coverage=1 00:09:27.856 --rc genhtml_legend=1 00:09:27.856 --rc geninfo_all_blocks=1 00:09:27.856 --rc geninfo_unexecuted_blocks=1 00:09:27.856 00:09:27.856 ' 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:27.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.856 --rc genhtml_branch_coverage=1 00:09:27.856 --rc genhtml_function_coverage=1 00:09:27.856 --rc genhtml_legend=1 00:09:27.856 --rc geninfo_all_blocks=1 00:09:27.856 --rc geninfo_unexecuted_blocks=1 00:09:27.856 00:09:27.856 ' 00:09:27.856 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:27.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.856 --rc genhtml_branch_coverage=1 00:09:27.856 --rc genhtml_function_coverage=1 00:09:27.856 --rc genhtml_legend=1 00:09:27.856 --rc geninfo_all_blocks=1 00:09:27.856 --rc geninfo_unexecuted_blocks=1 00:09:27.856 00:09:27.856 ' 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:27.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.857 --rc genhtml_branch_coverage=1 00:09:27.857 --rc genhtml_function_coverage=1 00:09:27.857 --rc genhtml_legend=1 00:09:27.857 --rc geninfo_all_blocks=1 00:09:27.857 --rc geninfo_unexecuted_blocks=1 00:09:27.857 00:09:27.857 ' 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.857 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:09:27.857 12:23:33 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:09:30.393 12:23:35 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:30.393 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:09:30.393 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:09:30.394 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:09:30.394 Found net devices under 0000:83:00.0: mlx_0_0 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found 
net devices under 0000:83:00.1: mlx_0_1' 00:09:30.394 Found net devices under 0000:83:00.1: mlx_0_1 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 
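The gather_supported_nvmf_pci_devs and load_ib_rdma_modules steps traced above boil down to: load the RDMA kernel stack, then walk /sys/bus/pci for net devices that sit under Mellanox (0x15b3) functions. A loose sketch of that flow (the authoritative version, including the per-device-ID filtering, is test/nvmf/common.sh):

# Load the InfiniBand/RDMA module stack, as in load_ib_rdma_modules above.
for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
  modprobe "$m"
done
# Find net devices owned by Mellanox PCI functions.
for pci in /sys/bus/pci/devices/*; do
  vendor=$(cat "$pci/vendor")          # e.g. 0x15b3 for Mellanox
  [[ $vendor == 0x15b3 ]] || continue
  for dev in "$pci"/net/*; do
    [[ -e $dev ]] && echo "Found net devices under ${pci##*/}: ${dev##*/}"
  done
done

On this node both ConnectX ports (0x15b3:0x1015) pass the filter, yielding mlx_0_0 under 0000:83:00.0 and mlx_0_1 under 0000:83:00.1, as logged above.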
00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:30.394 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:30.394 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:09:30.394 altname enp131s0f0np0 00:09:30.394 inet 192.168.100.8/24 scope global mlx_0_0 00:09:30.394 valid_lft forever preferred_lft forever 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:30.394 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:30.394 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:09:30.394 
altname enp131s0f1np1 00:09:30.394 inet 192.168.100.9/24 scope global mlx_0_1 00:09:30.394 valid_lft forever preferred_lft forever 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:30.394 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:30.395 192.168.100.9' 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:30.395 192.168.100.9' 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:30.395 192.168.100.9' 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2711107 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2711107 00:09:30.395 12:23:35 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2711107 ']' 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.395 12:23:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.395 [2024-11-20 12:23:35.881991] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:09:30.395 [2024-11-20 12:23:35.882101] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.395 [2024-11-20 12:23:35.954976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.395 [2024-11-20 12:23:36.020991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.395 [2024-11-20 12:23:36.021052] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.395 [2024-11-20 12:23:36.021069] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.395 [2024-11-20 12:23:36.021082] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.395 [2024-11-20 12:23:36.021093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
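Both allocate_nic_ips and get_available_rdma_ips above lean on the same three-command pipeline to pull an interface's IPv4 address. Condensed into a sketch, with the interface names taken from this run:

get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
RDMA_IP_LIST=$(for i in mlx_0_0 mlx_0_1; do get_ip_address "$i"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9

The head/tail split is exactly what nvmf/common.sh@485-486 does in the trace above to pick the first and second target IPs.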
00:09:30.395 [2024-11-20 12:23:36.022420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.395 [2024-11-20 12:23:36.022470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.395 [2024-11-20 12:23:36.022521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:30.395 [2024-11-20 12:23:36.022525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.395 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.395 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:30.395 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:30.395 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:30.395 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.654 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.654 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:30.654 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.654 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.654 [2024-11-20 12:23:36.212548] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c160d0/0x1c1a5c0) succeed. 00:09:30.654 [2024-11-20 12:23:36.228482] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c17760/0x1c5bc60) succeed. 
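rpc_cmd is a thin wrapper around SPDK's scripts/rpc.py pointed at the target's RPC socket, so the transport-creation call above is equivalent to the standalone sketch below (socket path as used by this run):

/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

Here -u sets io_unit_size to 8192 bytes and --num-shared-buffers sizes the shared RDMA receive-buffer pool; success shows up as the two "Create IB device mlx5_0/mlx5_1 ... succeed" notices logged above, one per port.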
00:09:30.654 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.654 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:30.654 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:30.654 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.654 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:30.654 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:30.654 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:30.654 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.654 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.912 Malloc0 00:09:30.912 [2024-11-20 12:23:36.467579] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2711155 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2711155 /var/tmp/bdevperf.sock 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2711155 ']' 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:30.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
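The rpcs.txt batch cat'd into rpc_cmd at host_management.sh@23-30 is never echoed to the log. Based on the variables set earlier in this trace (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, NVMF_SERIAL=SPDKISFASTANDAWESOME, listener on 192.168.100.8:4420), the batch plausibly looks like the sketch below; treat the exact lines as an assumption, not a transcript:

cat > rpcs.txt <<'EOF'
bdev_malloc_create 64 512 -b Malloc0
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
EOF
# Each line is an rpc.py subcommand; replay them one by one against the target
# socket ($cmd left unquoted on purpose so it word-splits into arguments).
while read -r cmd; do scripts/rpc.py -s /var/tmp/spdk.sock $cmd; done < rpcs.txt

The "NVMe/RDMA Target Listening on 192.168.100.8 port 4420" and Malloc0 lines above are consistent with a batch of this shape.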
00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:30.912 { 00:09:30.912 "params": { 00:09:30.912 "name": "Nvme$subsystem", 00:09:30.912 "trtype": "$TEST_TRANSPORT", 00:09:30.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:30.912 "adrfam": "ipv4", 00:09:30.912 "trsvcid": "$NVMF_PORT", 00:09:30.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:30.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:30.912 "hdgst": ${hdgst:-false}, 00:09:30.912 "ddgst": ${ddgst:-false} 00:09:30.912 }, 00:09:30.912 "method": "bdev_nvme_attach_controller" 00:09:30.912 } 00:09:30.912 EOF 00:09:30.912 )") 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:30.912 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:30.912 "params": { 00:09:30.912 "name": "Nvme0", 00:09:30.912 "trtype": "rdma", 00:09:30.912 "traddr": "192.168.100.8", 00:09:30.912 "adrfam": "ipv4", 00:09:30.912 "trsvcid": "4420", 00:09:30.912 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:30.912 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:30.912 "hdgst": false, 00:09:30.912 "ddgst": false 00:09:30.912 }, 00:09:30.912 "method": "bdev_nvme_attach_controller" 00:09:30.912 }' 00:09:30.912 [2024-11-20 12:23:36.562735] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:09:30.912 [2024-11-20 12:23:36.562829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2711155 ] 00:09:30.912 [2024-11-20 12:23:36.637038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.170 [2024-11-20 12:23:36.699947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.170 Running I/O for 10 seconds... 
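"Running I/O for 10 seconds..." marks bdevperf attaching with the JSON printed just above (-q 64, 64 KiB verify workload, rdma trtype). For reference, the kernel-initiator equivalent of that attach, built from the NVME_CONNECT='nvme connect -i 15' and hostnqn values set earlier in this trace, would be roughly the following sketch; this job uses the userspace bdevperf path instead:

modprobe nvme-rdma    # already loaded at nvmf/common.sh@502 above
nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
     -n nqn.2016-06.io.spdk:cnode0 \
     --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae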
00:09:31.170 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.170 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:31.170 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:31.170 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.170 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:31.428 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.428 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:31.428 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:31.428 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:31.428 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:31.428 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:31.428 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:31.428 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:31.428 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:31.428 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:31.428 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:31.428 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.428 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:31.428 12:23:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.428 12:23:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=115 00:09:31.428 12:23:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 115 -ge 100 ']' 00:09:31.428 12:23:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:31.428 12:23:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:31.428 12:23:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:31.428 12:23:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:31.428 12:23:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.428 12:23:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
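waitforio (host_management.sh@54-64 in the trace above) polls bdevperf's iostat until at least 100 reads have completed, proving I/O is actually flowing before the host-management steps run. Condensed into a sketch (the pacing sleep is illustrative):

for ((i = 10; i != 0; i--)); do
  count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
          jq -r '.bdevs[0].num_read_ops')
  if [ "$count" -ge 100 ]; then ret=0; break; fi   # here: 115 on the first poll
  sleep 0.25
done

With I/O confirmed, nvmf_subsystem_remove_host revokes host0's access mid-run; the long run of "ABORTED - SQ DELETION" qpair notices that follows below is the expected effect of that removal, after which nvmf_subsystem_add_host restores access.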
00:09:31.428 12:23:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.428 12:23:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:31.428 12:23:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.428 12:23:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:31.428 12:23:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.428 12:23:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:32.364 192.00 IOPS, 12.00 MiB/s [2024-11-20T11:23:38.130Z] [2024-11-20 12:23:38.026402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000ccf700 len:0x10000 key:0x181e00 00:09:32.364 [2024-11-20 12:23:38.026446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.026475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cbf680 len:0x10000 key:0x181e00 00:09:32.364 [2024-11-20 12:23:38.026499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.026518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000caf600 len:0x10000 key:0x181e00 00:09:32.364 [2024-11-20 12:23:38.026534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.026552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c9f580 len:0x10000 key:0x181e00 00:09:32.364 [2024-11-20 12:23:38.026567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.026584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c8f500 len:0x10000 key:0x181e00 00:09:32.364 [2024-11-20 12:23:38.026604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.026622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c7f480 len:0x10000 key:0x181e00 00:09:32.364 [2024-11-20 12:23:38.026637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.026654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c6f400 len:0x10000 key:0x181e00 00:09:32.364 [2024-11-20 12:23:38.026669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 
sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.026686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c5f380 len:0x10000 key:0x181e00 00:09:32.364 [2024-11-20 12:23:38.026700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.026717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c4f300 len:0x10000 key:0x181e00 00:09:32.364 [2024-11-20 12:23:38.026739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.026756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c3f280 len:0x10000 key:0x181e00 00:09:32.364 [2024-11-20 12:23:38.026771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.026788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c2f200 len:0x10000 key:0x181e00 00:09:32.364 [2024-11-20 12:23:38.026803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.026820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c1f180 len:0x10000 key:0x181e00 00:09:32.364 [2024-11-20 12:23:38.026834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.026851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c0f100 len:0x10000 key:0x181e00 00:09:32.364 [2024-11-20 12:23:38.026866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.026883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000ff0000 len:0x10000 key:0x182b00 00:09:32.364 [2024-11-20 12:23:38.026898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.026915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000fdff80 len:0x10000 key:0x182b00 00:09:32.364 [2024-11-20 12:23:38.026930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.026946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bf0000 len:0x10000 key:0x181f00 00:09:32.364 [2024-11-20 12:23:38.026961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 
00:09:32.364 [2024-11-20 12:23:38.026978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008a40000 len:0x10000 key:0x182900 00:09:32.364 [2024-11-20 12:23:38.026993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.027011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008a61000 len:0x10000 key:0x182900 00:09:32.364 [2024-11-20 12:23:38.027026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.027043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008a82000 len:0x10000 key:0x182900 00:09:32.364 [2024-11-20 12:23:38.027058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.027075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008aa3000 len:0x10000 key:0x182900 00:09:32.364 [2024-11-20 12:23:38.027093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.027111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008ac4000 len:0x10000 key:0x182900 00:09:32.364 [2024-11-20 12:23:38.027126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.027143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008ae5000 len:0x10000 key:0x182900 00:09:32.364 [2024-11-20 12:23:38.027158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.027175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008b06000 len:0x10000 key:0x182900 00:09:32.364 [2024-11-20 12:23:38.027189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.027206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008b27000 len:0x10000 key:0x182900 00:09:32.364 [2024-11-20 12:23:38.027221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.027238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008b48000 len:0x10000 key:0x182900 00:09:32.364 [2024-11-20 12:23:38.027253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.027270] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008b69000 len:0x10000 key:0x182900 00:09:32.364 [2024-11-20 12:23:38.027285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.027302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008b8a000 len:0x10000 key:0x182900 00:09:32.364 [2024-11-20 12:23:38.027316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.027333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008bab000 len:0x10000 key:0x182900 00:09:32.364 [2024-11-20 12:23:38.027347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.027364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008bcc000 len:0x10000 key:0x182900 00:09:32.364 [2024-11-20 12:23:38.027379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.027396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008bed000 len:0x10000 key:0x182900 00:09:32.364 [2024-11-20 12:23:38.027410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.027427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008c0e000 len:0x10000 key:0x182900 00:09:32.364 [2024-11-20 12:23:38.027442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.027463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008c2f000 len:0x10000 key:0x182900 00:09:32.364 [2024-11-20 12:23:38.027484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.364 [2024-11-20 12:23:38.027503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000988f000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.027519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.027540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000986e000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.027555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.027572] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000984d000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.027587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.027604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000982c000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.027618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.027636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000980b000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.027650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.027667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000097ea000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.027682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.027699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000097c9000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.027713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.027730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000097a8000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.027745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.027762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009787000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.027776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.027793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009766000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.027808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.027829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009745000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.027844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.027861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 
lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009724000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.027876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.027893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009703000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.027908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.027925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000096e2000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.027941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.027958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000096c1000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.027973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.027989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000096a0000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.028006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.028022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009a9f000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.028037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.028054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009a7e000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.028069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.028086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009a5d000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.028101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.028118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009a3c000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.028134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.028152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x200009a1b000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.028166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.028183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000099fa000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.028202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.028219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000099d9000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.028234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.028251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000099b8000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.028266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.028283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009997000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.028298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.028315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009976000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.028330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.028353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009955000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.028368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.028385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009934000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.028399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.028416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009913000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.028430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.028447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000098f2000 len:0x10000 
key:0x182900 00:09:32.365 [2024-11-20 12:23:38.028461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.028484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000098d1000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.028501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 [2024-11-20 12:23:38.028517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000098b0000 len:0x10000 key:0x182900 00:09:32.365 [2024-11-20 12:23:38.028535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4b157000 sqhd:8250 p:0 m:0 dnr:0 00:09:32.365 12:23:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2711155 00:09:32.365 12:23:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:32.365 12:23:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:32.365 12:23:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:32.365 12:23:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:32.365 12:23:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:32.365 12:23:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:32.365 12:23:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:32.365 { 00:09:32.365 "params": { 00:09:32.365 "name": "Nvme$subsystem", 00:09:32.365 "trtype": "$TEST_TRANSPORT", 00:09:32.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.365 "adrfam": "ipv4", 00:09:32.365 "trsvcid": "$NVMF_PORT", 00:09:32.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.366 "hdgst": ${hdgst:-false}, 00:09:32.366 "ddgst": ${ddgst:-false} 00:09:32.366 }, 00:09:32.366 "method": "bdev_nvme_attach_controller" 00:09:32.366 } 00:09:32.366 EOF 00:09:32.366 )") 00:09:32.366 12:23:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:32.366 12:23:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
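gen_nvmf_target_json, expanded above, assembles one bdev_nvme_attach_controller record per subsystem from a heredoc template and feeds the result to bdevperf through a process substitution (the --json /dev/fd/62 argument at @100). A minimal sketch of the same templating pattern, with illustrative names rather than the exact helpers from nvmf/common.sh:

gen_config() {
    # Expand one attach-controller record for subsystem $1.
    local subsystem=$1 traddr=$2 trsvcid=$3
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "rdma",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# The JSON never touches disk; bdevperf reads it as an open file descriptor:
# bdevperf --json <(gen_config 0 192.168.100.8 4420) -q 64 -o 65536 -w verify -t 1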
00:09:32.366 12:23:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:32.366 12:23:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:32.366 "params": { 00:09:32.366 "name": "Nvme0", 00:09:32.366 "trtype": "rdma", 00:09:32.366 "traddr": "192.168.100.8", 00:09:32.366 "adrfam": "ipv4", 00:09:32.366 "trsvcid": "4420", 00:09:32.366 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:32.366 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:32.366 "hdgst": false, 00:09:32.366 "ddgst": false 00:09:32.366 }, 00:09:32.366 "method": "bdev_nvme_attach_controller" 00:09:32.366 }' 00:09:32.366 [2024-11-20 12:23:38.088879] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:09:32.366 [2024-11-20 12:23:38.088975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2711360 ] 00:09:32.624 [2024-11-20 12:23:38.161792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.624 [2024-11-20 12:23:38.226126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.885 Running I/O for 1 seconds... 00:09:33.820 2032.00 IOPS, 127.00 MiB/s 00:09:33.820 Latency(us) 00:09:33.820 [2024-11-20T11:23:39.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.820 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:33.820 Verification LBA range: start 0x0 length 0x400 00:09:33.820 Nvme0n1 : 1.01 2067.80 129.24 0.00 0.00 30197.30 1601.99 46215.02 00:09:33.820 [2024-11-20T11:23:39.586Z] =================================================================================================================== 00:09:33.820 [2024-11-20T11:23:39.586Z] Total : 2067.80 129.24 0.00 0.00 30197.30 1601.99 46215.02 00:09:34.078 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2711155 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 
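The one-second verify run settles at 2067.80 IOPS, and the 129.24 MiB/s column is simply that rate multiplied by the 64 KiB I/O size requested with -o 65536 (the "Killed" line is the shell reaping the earlier bdevperf instance, pid 2711155, that @91 terminated with kill -9). A quick arithmetic check of the summary row:

awk 'BEGIN { printf "%.2f MiB/s\n", 2067.80 * 65536 / 1048576 }'
# prints 129.24 MiB/s, matching the Nvme0n1 and Total rows above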
00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:34.078 rmmod nvme_rdma 00:09:34.078 rmmod nvme_fabrics 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2711107 ']' 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2711107 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2711107 ']' 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2711107 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2711107 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2711107' 00:09:34.078 killing process with pid 2711107 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2711107 00:09:34.078 12:23:39 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2711107 00:09:34.338 [2024-11-20 12:23:40.040451] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:34.338 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:34.338 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:34.338 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:34.338 00:09:34.338 real 0m6.982s 00:09:34.338 user 0m19.792s 00:09:34.338 sys 0m2.787s 00:09:34.338 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.338 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:34.338 ************************************ 00:09:34.338 END TEST nvmf_host_management 00:09:34.338 ************************************ 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:34.599 ************************************ 00:09:34.599 START TEST nvmf_lvol 00:09:34.599 ************************************ 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:09:34.599 * Looking for test storage... 00:09:34.599 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:34.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.599 --rc genhtml_branch_coverage=1 00:09:34.599 --rc genhtml_function_coverage=1 00:09:34.599 --rc genhtml_legend=1 00:09:34.599 --rc geninfo_all_blocks=1 00:09:34.599 --rc geninfo_unexecuted_blocks=1 00:09:34.599 00:09:34.599 ' 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:34.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.599 --rc genhtml_branch_coverage=1 00:09:34.599 --rc genhtml_function_coverage=1 00:09:34.599 --rc genhtml_legend=1 00:09:34.599 --rc geninfo_all_blocks=1 00:09:34.599 --rc geninfo_unexecuted_blocks=1 00:09:34.599 00:09:34.599 ' 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:34.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.599 --rc genhtml_branch_coverage=1 00:09:34.599 --rc genhtml_function_coverage=1 00:09:34.599 --rc genhtml_legend=1 00:09:34.599 --rc geninfo_all_blocks=1 00:09:34.599 --rc geninfo_unexecuted_blocks=1 00:09:34.599 00:09:34.599 ' 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:34.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.599 --rc genhtml_branch_coverage=1 00:09:34.599 --rc genhtml_function_coverage=1 00:09:34.599 --rc genhtml_legend=1 00:09:34.599 --rc geninfo_all_blocks=1 00:09:34.599 --rc geninfo_unexecuted_blocks=1 00:09:34.599 00:09:34.599 ' 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.599 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:34.600 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:09:34.600 12:23:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:37.256 12:23:42 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:37.256 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:09:37.257 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:09:37.257 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:37.257 12:23:42 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:09:37.257 Found net devices under 0000:83:00.0: mlx_0_0 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:09:37.257 Found net devices under 0000:83:00.1: mlx_0_1 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 
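Device discovery here walks the candidate PCI functions, matches the Mellanox vendor/device pair 0x15b3/0x1015 (ConnectX-4 Lx), and resolves each function's netdev through sysfs before loading the IB/RDMA module stack (ib_cm and ib_core above; ib_umad, ib_uverbs, iw_cm and rdma_cm just below). A rough equivalent of that sysfs walk, using the standard kernel layout rather than the SPDK helper itself:

for pci in /sys/bus/pci/devices/*; do
    read -r vendor < "$pci/vendor"
    read -r device < "$pci/device"
    [[ $vendor == 0x15b3 && $device == 0x1015 ]] || continue
    for net in "$pci"/net/*; do
        # e.g. "Found net devices under 0000:83:00.0: mlx_0_0"
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done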
00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:37.257 
12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:37.257 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:37.257 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:09:37.257 altname enp131s0f0np0 00:09:37.257 inet 192.168.100.8/24 scope global mlx_0_0 00:09:37.257 valid_lft forever preferred_lft forever 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:37.257 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:37.257 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:09:37.257 altname enp131s0f1np1 00:09:37.257 inet 192.168.100.9/24 scope global mlx_0_1 00:09:37.257 valid_lft forever preferred_lft forever 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:09:37.257 12:23:42 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:37.257 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:37.258 192.168.100.9' 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:37.258 192.168.100.9' 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:37.258 192.168.100.9' 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2712855 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2712855 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2712855 ']' 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.258 12:23:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:37.258 [2024-11-20 12:23:42.827814] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:09:37.258 [2024-11-20 12:23:42.827983] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.258 [2024-11-20 12:23:42.934531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:37.258 [2024-11-20 12:23:42.996440] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:37.258 [2024-11-20 12:23:42.996509] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:37.258 [2024-11-20 12:23:42.996526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:37.258 [2024-11-20 12:23:42.996545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:37.258 [2024-11-20 12:23:42.996557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
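The nvmfappstart/waitforlisten sequence traced above boils down to launching nvmf_tgt and polling its UNIX-domain RPC socket until it answers. A minimal bash sketch of that readiness loop, reusing the binary and socket paths from this run (the retry count and poll interval here are assumptions, not values confirmed by this log):

# Start the NVMe-oF target: instance 0, all tracepoint groups (0xFFFF), core mask 0x7
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!

# Poll /var/tmp/spdk.sock until the app services RPCs; rpc_get_methods is a
# cheap query that only succeeds once the socket is live.
for ((i = 0; i < 100; i++)); do  # 100 retries assumed, mirroring max_retries above
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.5
done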
00:09:37.258 [2024-11-20 12:23:42.997771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.258 [2024-11-20 12:23:42.997890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:37.258 [2024-11-20 12:23:42.997925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.517 12:23:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.517 12:23:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:37.517 12:23:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:37.517 12:23:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:37.517 12:23:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:37.517 12:23:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.517 12:23:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:37.776 [2024-11-20 12:23:43.469110] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x252b260/0x252f750) succeed. 00:09:37.776 [2024-11-20 12:23:43.483467] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x252c850/0x2570df0) succeed. 00:09:38.033 12:23:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.292 12:23:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:38.292 12:23:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.859 12:23:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:38.859 12:23:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:39.116 12:23:44 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:39.374 12:23:45 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9ffb918e-92b2-463e-aa3e-a7fe74041f5f 00:09:39.375 12:23:45 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9ffb918e-92b2-463e-aa3e-a7fe74041f5f lvol 20 00:09:39.632 12:23:45 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=37beeaaa-3af6-4184-8eb4-26390bbf5817 00:09:39.632 12:23:45 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:40.198 12:23:45 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 37beeaaa-3af6-4184-8eb4-26390bbf5817 00:09:40.456 12:23:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:40.714 [2024-11-20 12:23:46.345140] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:40.714 12:23:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:40.972 12:23:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2713215 00:09:40.972 12:23:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:40.972 12:23:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:42.347 12:23:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 37beeaaa-3af6-4184-8eb4-26390bbf5817 MY_SNAPSHOT 00:09:42.347 12:23:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b10ac93e-ecfb-4995-bab9-663bc4150d4f 00:09:42.347 12:23:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 37beeaaa-3af6-4184-8eb4-26390bbf5817 30 00:09:42.913 12:23:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b10ac93e-ecfb-4995-bab9-663bc4150d4f MY_CLONE 00:09:43.171 12:23:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=03b38c0a-e0bf-4216-908e-1eac4c8d25f8 00:09:43.171 12:23:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 03b38c0a-e0bf-4216-908e-1eac4c8d25f8 00:09:43.429 12:23:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2713215 00:09:53.398 Initializing NVMe Controllers 00:09:53.398 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:09:53.398 Controller IO queue size 128, less than required. 00:09:53.398 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:53.398 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:53.398 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:53.398 Initialization complete. Launching workers. 
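The latency summary below was produced by the spdk_nvme_perf run traced at target/nvmf_lvol.sh@41 above. To replay the same workload against this listener by hand, the invocation reduces to the following (all flags copied verbatim from the traced command):

# 4 KiB I/O (-o) random writes (-w) at queue depth 128 (-q) for 10 s (-t),
# workers pinned to cores 3 and 4 (-c 0x18), matching lcores 3/4 in the results
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18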
00:09:53.398 ========================================================
00:09:53.398                                                                                Latency(us)
00:09:53.398 Device Information                                                          :       IOPS      MiB/s    Average        min        max
00:09:53.398 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   12716.08      49.67   10069.70    3446.69   64968.65
00:09:53.398 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   12664.88      49.47   10109.50    3262.23   53930.93
00:09:53.398 ========================================================
00:09:53.398 Total                                                                       :   25380.96      99.14   10089.56    3262.23   64968.65
00:09:53.398
00:09:53.398 12:23:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:53.398 12:23:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 37beeaaa-3af6-4184-8eb4-26390bbf5817 00:09:53.398 12:23:58 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9ffb918e-92b2-463e-aa3e-a7fe74041f5f 00:09:53.398 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:53.398 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:53.398 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:53.398 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:53.398 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:53.398 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:53.398 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:53.398 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:53.398 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:53.398 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:53.398 rmmod nvme_rdma 00:09:53.398 rmmod nvme_fabrics 00:09:53.398 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:53.398 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:53.398 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:53.398 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2712855 ']' 00:09:53.398 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2712855 00:09:53.398 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2712855 ']' 00:09:53.398 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2712855 00:09:53.398 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:53.656 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.656 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2712855 00:09:53.657 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.657 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.657 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2712855' 00:09:53.657 killing process with pid 2712855 00:09:53.657 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2712855 00:09:53.657 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2712855 00:09:53.915 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:53.915 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:53.915 00:09:53.915 real 0m19.382s 00:09:53.915 user 1m17.063s 00:09:53.915 sys 0m3.137s 00:09:53.915 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.915 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:53.915 ************************************ 00:09:53.915 END TEST nvmf_lvol 00:09:53.915 ************************************ 00:09:53.915 12:23:59 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:09:53.915 12:23:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:53.915 12:23:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.915 12:23:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:53.915 ************************************ 00:09:53.915 START TEST nvmf_lvs_grow 00:09:53.915 ************************************ 00:09:53.915 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:09:53.915 * Looking for test storage... 
00:09:53.915 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:53.915 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:53.915 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:53.915 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:54.176 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:54.176 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.176 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.176 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.176 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.176 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.176 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.176 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.176 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.176 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.176 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.176 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.176 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:54.176 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:54.176 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.176 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.176 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:54.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.177 --rc genhtml_branch_coverage=1 00:09:54.177 --rc genhtml_function_coverage=1 00:09:54.177 --rc genhtml_legend=1 00:09:54.177 --rc geninfo_all_blocks=1 00:09:54.177 --rc geninfo_unexecuted_blocks=1 00:09:54.177 00:09:54.177 ' 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:54.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.177 --rc genhtml_branch_coverage=1 00:09:54.177 --rc genhtml_function_coverage=1 00:09:54.177 --rc genhtml_legend=1 00:09:54.177 --rc geninfo_all_blocks=1 00:09:54.177 --rc geninfo_unexecuted_blocks=1 00:09:54.177 00:09:54.177 ' 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:54.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.177 --rc genhtml_branch_coverage=1 00:09:54.177 --rc genhtml_function_coverage=1 00:09:54.177 --rc genhtml_legend=1 00:09:54.177 --rc geninfo_all_blocks=1 00:09:54.177 --rc geninfo_unexecuted_blocks=1 00:09:54.177 00:09:54.177 ' 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:54.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.177 --rc genhtml_branch_coverage=1 00:09:54.177 --rc genhtml_function_coverage=1 00:09:54.177 --rc genhtml_legend=1 00:09:54.177 --rc geninfo_all_blocks=1 00:09:54.177 --rc geninfo_unexecuted_blocks=1 00:09:54.177 00:09:54.177 ' 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:54.177 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:54.177 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:54.178 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:54.178 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:54.178 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:54.178 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:54.178 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:54.178 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.178 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:54.178 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:54.178 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:54.178 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.178 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.178 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.178 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:54.178 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:54.178 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:54.178 12:23:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.087 12:24:01 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:09:56.087 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:09:56.087 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:56.087 12:24:01 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:09:56.087 Found net devices under 0000:83:00.0: mlx_0_0 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:09:56.087 Found net devices under 0000:83:00.1: mlx_0_1 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:56.087 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:56.088 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:56.088 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:09:56.088 altname enp131s0f0np0 00:09:56.088 inet 192.168.100.8/24 scope global mlx_0_0 00:09:56.088 valid_lft forever preferred_lft forever 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:56.088 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:56.088 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:09:56.088 altname enp131s0f1np1 00:09:56.088 inet 192.168.100.9/24 scope global mlx_0_1 00:09:56.088 valid_lft forever preferred_lft forever 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:56.088 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:56.347 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:56.347 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:56.347 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:56.347 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:09:56.347 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:56.347 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:56.347 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:56.347 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:56.347 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:56.347 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:56.347 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:56.347 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:56.347 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:56.347 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:56.347 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:56.347 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:56.347 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:56.347 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:56.347 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:56.348 192.168.100.9' 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:56.348 192.168.100.9' 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:56.348 192.168.100.9' 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:56.348 12:24:01 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2715724 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2715724 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2715724 ']' 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.348 12:24:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:56.348 [2024-11-20 12:24:01.966264] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:09:56.348 [2024-11-20 12:24:01.966355] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.348 [2024-11-20 12:24:02.036424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.348 [2024-11-20 12:24:02.097496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.348 [2024-11-20 12:24:02.097559] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.348 [2024-11-20 12:24:02.097575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.348 [2024-11-20 12:24:02.097589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.348 [2024-11-20 12:24:02.097602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
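The app_setup_trace notices above give the recipe for pulling a tracepoint snapshot out of this target while it runs. A sketch using the build tree from this workspace (the copy destination is an arbitrary choice, not taken from the log):

# Decode live tracepoints from the running app (app name nvmf, shm instance 0),
# exactly as the notice suggests
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0

# Or keep the shared-memory trace file for offline decoding after the app exits
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_trace -f /tmp/nvmf_trace.0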
00:09:56.348 [2024-11-20 12:24:02.098112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.606 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.606 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:56.606 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:56.606 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:56.606 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:56.606 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.606 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:56.865 [2024-11-20 12:24:02.609968] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7659e0/0x769ed0) succeed. 00:09:56.865 [2024-11-20 12:24:02.623637] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x766e90/0x7ab570) succeed. 00:09:57.124 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:57.124 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:57.124 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.124 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:57.124 ************************************ 00:09:57.124 START TEST lvs_grow_clean 00:09:57.124 ************************************ 00:09:57.124 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:57.124 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:57.124 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:57.124 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:57.124 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:57.124 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:57.124 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:57.124 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:57.124 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:57.124 12:24:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:57.383 12:24:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:57.383 12:24:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:57.641 12:24:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f5530bb1-c610-4ff5-92e7-a4ae64163a45 00:09:57.641 12:24:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5530bb1-c610-4ff5-92e7-a4ae64163a45 00:09:57.641 12:24:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:58.207 12:24:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:58.207 12:24:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:58.207 12:24:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f5530bb1-c610-4ff5-92e7-a4ae64163a45 lvol 150 00:09:58.465 12:24:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4824bd24-8585-4c02-8c57-d9d289bc3665 00:09:58.465 12:24:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:58.465 12:24:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:58.724 [2024-11-20 12:24:04.372115] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:58.724 [2024-11-20 12:24:04.372187] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:58.724 true 00:09:58.724 12:24:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:58.724 12:24:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5530bb1-c610-4ff5-92e7-a4ae64163a45 00:09:58.982 12:24:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:58.982 12:24:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:59.548 12:24:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4824bd24-8585-4c02-8c57-d9d289bc3665 00:09:59.806 12:24:05 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:00.064 [2024-11-20 12:24:05.708774] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:00.064 12:24:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:00.323 12:24:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2716337 00:10:00.323 12:24:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:00.323 12:24:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2716337 /var/tmp/bdevperf.sock 00:10:00.323 12:24:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2716337 ']' 00:10:00.323 12:24:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:00.323 12:24:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:00.323 12:24:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.323 12:24:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:00.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:00.323 12:24:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.323 12:24:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:00.581 [2024-11-20 12:24:06.116367] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
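The clean run above reduces to a short RPC sequence. A minimal standalone sketch of the same grow-on-rescan flow, assuming a running nvmf_tgt with scripts/rpc.py on PATH and a scratch file at /tmp/aio_bdev (sizes, cluster arithmetic, and RPC names mirror this run; the variable captures are illustrative, not part of the test script):

  truncate -s 200M /tmp/aio_bdev
  rpc.py bdev_aio_create /tmp/aio_bdev aio_bdev 4096
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  # 200 MiB at 4 MiB clusters = 50 clusters, minus metadata -> 49 data clusters
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB -> 38 clusters (38912 blocks)
  truncate -s 400M /tmp/aio_bdev        # grow the backing file...
  rpc.py bdev_aio_rescan aio_bdev       # ...and let bdev_aio see it (51200 -> 102400 blocks)
  rpc.py bdev_lvol_grow_lvstore -u "$lvs"
  # 400 MiB -> 99 data clusters; free = 99 - 38 allocated = 61
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

In the test itself, bdev_lvol_grow_lvstore is issued mid-run: bdevperf (started just above with -z, then driven via perform_tests) is writing to the lvol over the RDMA listener at 192.168.100.8:4420 while the lvstore grows, so the resize is exercised under live I/O rather than on an idle device.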
00:10:00.581 [2024-11-20 12:24:06.116476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2716337 ] 00:10:00.581 [2024-11-20 12:24:06.188795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.581 [2024-11-20 12:24:06.251575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.840 12:24:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.840 12:24:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:10:00.840 12:24:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:01.098 Nvme0n1 00:10:01.098 12:24:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:01.666 [ 00:10:01.666 { 00:10:01.666 "name": "Nvme0n1", 00:10:01.666 "aliases": [ 00:10:01.666 "4824bd24-8585-4c02-8c57-d9d289bc3665" 00:10:01.666 ], 00:10:01.666 "product_name": "NVMe disk", 00:10:01.666 "block_size": 4096, 00:10:01.666 "num_blocks": 38912, 00:10:01.666 "uuid": "4824bd24-8585-4c02-8c57-d9d289bc3665", 00:10:01.666 "numa_id": 1, 00:10:01.666 "assigned_rate_limits": { 00:10:01.666 "rw_ios_per_sec": 0, 00:10:01.666 "rw_mbytes_per_sec": 0, 00:10:01.666 "r_mbytes_per_sec": 0, 00:10:01.666 "w_mbytes_per_sec": 0 00:10:01.666 }, 00:10:01.666 "claimed": false, 00:10:01.666 "zoned": false, 00:10:01.666 "supported_io_types": { 00:10:01.666 "read": true, 00:10:01.666 "write": true, 00:10:01.666 "unmap": true, 00:10:01.666 "flush": true, 00:10:01.666 "reset": true, 00:10:01.666 "nvme_admin": true, 00:10:01.666 "nvme_io": true, 00:10:01.666 "nvme_io_md": false, 00:10:01.666 "write_zeroes": true, 00:10:01.666 "zcopy": false, 00:10:01.666 "get_zone_info": false, 00:10:01.666 "zone_management": false, 00:10:01.666 "zone_append": false, 00:10:01.666 "compare": true, 00:10:01.666 "compare_and_write": true, 00:10:01.666 "abort": true, 00:10:01.666 "seek_hole": false, 00:10:01.666 "seek_data": false, 00:10:01.666 "copy": true, 00:10:01.666 "nvme_iov_md": false 00:10:01.666 }, 00:10:01.666 "memory_domains": [ 00:10:01.666 { 00:10:01.666 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:10:01.666 "dma_device_type": 0 00:10:01.666 } 00:10:01.666 ], 00:10:01.666 "driver_specific": { 00:10:01.666 "nvme": [ 00:10:01.666 { 00:10:01.666 "trid": { 00:10:01.666 "trtype": "RDMA", 00:10:01.666 "adrfam": "IPv4", 00:10:01.666 "traddr": "192.168.100.8", 00:10:01.666 "trsvcid": "4420", 00:10:01.666 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:01.666 }, 00:10:01.666 "ctrlr_data": { 00:10:01.666 "cntlid": 1, 00:10:01.666 "vendor_id": "0x8086", 00:10:01.666 "model_number": "SPDK bdev Controller", 00:10:01.666 "serial_number": "SPDK0", 00:10:01.666 "firmware_revision": "25.01", 00:10:01.666 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:01.666 "oacs": { 00:10:01.666 "security": 0, 00:10:01.666 "format": 0, 00:10:01.666 "firmware": 0, 00:10:01.666 "ns_manage": 0 00:10:01.666 }, 00:10:01.666 "multi_ctrlr": true, 
00:10:01.666 "ana_reporting": false 00:10:01.666 }, 00:10:01.666 "vs": { 00:10:01.666 "nvme_version": "1.3" 00:10:01.666 }, 00:10:01.666 "ns_data": { 00:10:01.666 "id": 1, 00:10:01.666 "can_share": true 00:10:01.666 } 00:10:01.666 } 00:10:01.666 ], 00:10:01.666 "mp_policy": "active_passive" 00:10:01.666 } 00:10:01.666 } 00:10:01.666 ] 00:10:01.666 12:24:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2716591 00:10:01.666 12:24:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:01.666 12:24:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:01.666 Running I/O for 10 seconds... 00:10:02.602 Latency(us) 00:10:02.602 [2024-11-20T11:24:08.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:02.602 Nvme0n1 : 1.00 19424.00 75.88 0.00 0.00 0.00 0.00 0.00 00:10:02.602 [2024-11-20T11:24:08.368Z] =================================================================================================================== 00:10:02.602 [2024-11-20T11:24:08.368Z] Total : 19424.00 75.88 0.00 0.00 0.00 0.00 0.00 00:10:02.602 00:10:03.537 12:24:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f5530bb1-c610-4ff5-92e7-a4ae64163a45 00:10:03.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:03.537 Nvme0n1 : 2.00 19712.00 77.00 0.00 0.00 0.00 0.00 0.00 00:10:03.537 [2024-11-20T11:24:09.303Z] =================================================================================================================== 00:10:03.537 [2024-11-20T11:24:09.303Z] Total : 19712.00 77.00 0.00 0.00 0.00 0.00 0.00 00:10:03.537 00:10:03.796 true 00:10:03.796 12:24:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:03.796 12:24:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5530bb1-c610-4ff5-92e7-a4ae64163a45 00:10:04.363 12:24:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:04.363 12:24:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:04.363 12:24:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2716591 00:10:04.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:04.622 Nvme0n1 : 3.00 19861.33 77.58 0.00 0.00 0.00 0.00 0.00 00:10:04.622 [2024-11-20T11:24:10.388Z] =================================================================================================================== 00:10:04.622 [2024-11-20T11:24:10.388Z] Total : 19861.33 77.58 0.00 0.00 0.00 0.00 0.00 00:10:04.622 00:10:05.558 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:05.558 Nvme0n1 : 4.00 19976.00 78.03 0.00 0.00 0.00 0.00 0.00 00:10:05.558 [2024-11-20T11:24:11.324Z] 
=================================================================================================================== 00:10:05.558 [2024-11-20T11:24:11.324Z] Total : 19976.00 78.03 0.00 0.00 0.00 0.00 0.00 00:10:05.558 00:10:06.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:06.934 Nvme0n1 : 5.00 20065.00 78.38 0.00 0.00 0.00 0.00 0.00 00:10:06.934 [2024-11-20T11:24:12.700Z] =================================================================================================================== 00:10:06.934 [2024-11-20T11:24:12.700Z] Total : 20065.00 78.38 0.00 0.00 0.00 0.00 0.00 00:10:06.934 00:10:07.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:07.871 Nvme0n1 : 6.00 20129.00 78.63 0.00 0.00 0.00 0.00 0.00 00:10:07.871 [2024-11-20T11:24:13.637Z] =================================================================================================================== 00:10:07.871 [2024-11-20T11:24:13.637Z] Total : 20129.00 78.63 0.00 0.00 0.00 0.00 0.00 00:10:07.871 00:10:08.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:08.806 Nvme0n1 : 7.00 20174.57 78.81 0.00 0.00 0.00 0.00 0.00 00:10:08.806 [2024-11-20T11:24:14.572Z] =================================================================================================================== 00:10:08.806 [2024-11-20T11:24:14.572Z] Total : 20174.57 78.81 0.00 0.00 0.00 0.00 0.00 00:10:08.806 00:10:09.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:09.742 Nvme0n1 : 8.00 20216.62 78.97 0.00 0.00 0.00 0.00 0.00 00:10:09.742 [2024-11-20T11:24:15.508Z] =================================================================================================================== 00:10:09.742 [2024-11-20T11:24:15.508Z] Total : 20216.62 78.97 0.00 0.00 0.00 0.00 0.00 00:10:09.742 00:10:10.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:10.678 Nvme0n1 : 9.00 20248.56 79.10 0.00 0.00 0.00 0.00 0.00 00:10:10.678 [2024-11-20T11:24:16.444Z] =================================================================================================================== 00:10:10.678 [2024-11-20T11:24:16.444Z] Total : 20248.56 79.10 0.00 0.00 0.00 0.00 0.00 00:10:10.678 00:10:11.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:11.612 Nvme0n1 : 10.00 20274.80 79.20 0.00 0.00 0.00 0.00 0.00 00:10:11.612 [2024-11-20T11:24:17.378Z] =================================================================================================================== 00:10:11.612 [2024-11-20T11:24:17.378Z] Total : 20274.80 79.20 0.00 0.00 0.00 0.00 0.00 00:10:11.612 00:10:11.612 00:10:11.612 Latency(us) 00:10:11.612 [2024-11-20T11:24:17.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:11.612 Nvme0n1 : 10.01 20276.17 79.20 0.00 0.00 6307.61 4563.25 19126.80 00:10:11.612 [2024-11-20T11:24:17.378Z] =================================================================================================================== 00:10:11.612 [2024-11-20T11:24:17.378Z] Total : 20276.17 79.20 0.00 0.00 6307.61 4563.25 19126.80 00:10:11.612 { 00:10:11.612 "results": [ 00:10:11.612 { 00:10:11.612 "job": "Nvme0n1", 00:10:11.612 "core_mask": "0x2", 00:10:11.612 "workload": "randwrite", 00:10:11.612 "status": "finished", 00:10:11.612 "queue_depth": 128, 00:10:11.612 "io_size": 4096, 00:10:11.612 
"runtime": 10.005635, 00:10:11.612 "iops": 20276.174375739272, 00:10:11.612 "mibps": 79.20380615523153, 00:10:11.612 "io_failed": 0, 00:10:11.613 "io_timeout": 0, 00:10:11.613 "avg_latency_us": 6307.611004247805, 00:10:11.613 "min_latency_us": 4563.247407407407, 00:10:11.613 "max_latency_us": 19126.802962962964 00:10:11.613 } 00:10:11.613 ], 00:10:11.613 "core_count": 1 00:10:11.613 } 00:10:11.613 12:24:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2716337 00:10:11.613 12:24:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2716337 ']' 00:10:11.613 12:24:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2716337 00:10:11.613 12:24:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:10:11.613 12:24:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.613 12:24:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2716337 00:10:11.613 12:24:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:11.613 12:24:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:11.613 12:24:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2716337' 00:10:11.613 killing process with pid 2716337 00:10:11.613 12:24:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2716337 00:10:11.613 Received shutdown signal, test time was about 10.000000 seconds 00:10:11.613 00:10:11.613 Latency(us) 00:10:11.613 [2024-11-20T11:24:17.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.613 [2024-11-20T11:24:17.379Z] =================================================================================================================== 00:10:11.613 [2024-11-20T11:24:17.379Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:11.613 12:24:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2716337 00:10:11.870 12:24:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:12.432 12:24:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:12.689 12:24:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5530bb1-c610-4ff5-92e7-a4ae64163a45 00:10:12.689 12:24:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:12.945 12:24:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:12.945 12:24:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:12.945 12:24:18 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:13.202 [2024-11-20 12:24:18.867014] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:13.202 12:24:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5530bb1-c610-4ff5-92e7-a4ae64163a45 00:10:13.202 12:24:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:10:13.203 12:24:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5530bb1-c610-4ff5-92e7-a4ae64163a45 00:10:13.203 12:24:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:13.203 12:24:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.203 12:24:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:13.203 12:24:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.203 12:24:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:13.203 12:24:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.203 12:24:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:13.203 12:24:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:10:13.203 12:24:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5530bb1-c610-4ff5-92e7-a4ae64163a45 00:10:13.460 request: 00:10:13.460 { 00:10:13.460 "uuid": "f5530bb1-c610-4ff5-92e7-a4ae64163a45", 00:10:13.460 "method": "bdev_lvol_get_lvstores", 00:10:13.460 "req_id": 1 00:10:13.460 } 00:10:13.460 Got JSON-RPC error response 00:10:13.460 response: 00:10:13.460 { 00:10:13.460 "code": -19, 00:10:13.460 "message": "No such device" 00:10:13.460 } 00:10:13.753 12:24:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:10:13.753 12:24:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:13.753 12:24:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:13.753 12:24:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:13.753 12:24:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:14.027 aio_bdev 00:10:14.027 12:24:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4824bd24-8585-4c02-8c57-d9d289bc3665 00:10:14.027 12:24:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=4824bd24-8585-4c02-8c57-d9d289bc3665 00:10:14.027 12:24:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.027 12:24:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:10:14.027 12:24:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.027 12:24:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.027 12:24:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:14.288 12:24:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4824bd24-8585-4c02-8c57-d9d289bc3665 -t 2000 00:10:14.546 [ 00:10:14.546 { 00:10:14.546 "name": "4824bd24-8585-4c02-8c57-d9d289bc3665", 00:10:14.546 "aliases": [ 00:10:14.546 "lvs/lvol" 00:10:14.546 ], 00:10:14.546 "product_name": "Logical Volume", 00:10:14.546 "block_size": 4096, 00:10:14.546 "num_blocks": 38912, 00:10:14.546 "uuid": "4824bd24-8585-4c02-8c57-d9d289bc3665", 00:10:14.546 "assigned_rate_limits": { 00:10:14.546 "rw_ios_per_sec": 0, 00:10:14.546 "rw_mbytes_per_sec": 0, 00:10:14.546 "r_mbytes_per_sec": 0, 00:10:14.546 "w_mbytes_per_sec": 0 00:10:14.546 }, 00:10:14.546 "claimed": false, 00:10:14.546 "zoned": false, 00:10:14.546 "supported_io_types": { 00:10:14.546 "read": true, 00:10:14.546 "write": true, 00:10:14.546 "unmap": true, 00:10:14.546 "flush": false, 00:10:14.546 "reset": true, 00:10:14.546 "nvme_admin": false, 00:10:14.546 "nvme_io": false, 00:10:14.546 "nvme_io_md": false, 00:10:14.546 "write_zeroes": true, 00:10:14.546 "zcopy": false, 00:10:14.546 "get_zone_info": false, 00:10:14.546 "zone_management": false, 00:10:14.546 "zone_append": false, 00:10:14.546 "compare": false, 00:10:14.546 "compare_and_write": false, 00:10:14.546 "abort": false, 00:10:14.546 "seek_hole": true, 00:10:14.546 "seek_data": true, 00:10:14.546 "copy": false, 00:10:14.546 "nvme_iov_md": false 00:10:14.546 }, 00:10:14.546 "driver_specific": { 00:10:14.546 "lvol": { 00:10:14.546 "lvol_store_uuid": "f5530bb1-c610-4ff5-92e7-a4ae64163a45", 00:10:14.546 "base_bdev": "aio_bdev", 00:10:14.546 "thin_provision": false, 00:10:14.546 "num_allocated_clusters": 38, 00:10:14.546 "snapshot": false, 00:10:14.546 "clone": false, 00:10:14.546 "esnap_clone": false 00:10:14.546 } 00:10:14.546 } 00:10:14.546 } 00:10:14.546 ] 00:10:14.546 12:24:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:10:14.546 12:24:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5530bb1-c610-4ff5-92e7-a4ae64163a45 00:10:14.546 12:24:20 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:14.805 12:24:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:14.805 12:24:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:14.805 12:24:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5530bb1-c610-4ff5-92e7-a4ae64163a45 00:10:15.371 12:24:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:15.371 12:24:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4824bd24-8585-4c02-8c57-d9d289bc3665 00:10:15.630 12:24:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f5530bb1-c610-4ff5-92e7-a4ae64163a45 00:10:15.888 12:24:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:16.147 12:24:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:16.405 00:10:16.405 real 0m19.245s 00:10:16.405 user 0m19.456s 00:10:16.405 sys 0m1.474s 00:10:16.405 12:24:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.405 12:24:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:16.405 ************************************ 00:10:16.405 END TEST lvs_grow_clean 00:10:16.405 ************************************ 00:10:16.405 12:24:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:16.405 12:24:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:16.405 12:24:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.405 12:24:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:16.405 ************************************ 00:10:16.405 START TEST lvs_grow_dirty 00:10:16.405 ************************************ 00:10:16.405 12:24:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:10:16.405 12:24:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:16.405 12:24:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:16.405 12:24:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:16.405 12:24:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:16.405 12:24:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # 
local aio_final_size_mb=400 00:10:16.405 12:24:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:16.405 12:24:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:16.405 12:24:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:16.405 12:24:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:16.664 12:24:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:16.664 12:24:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:16.922 12:24:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ee711dc0-349e-48c3-a623-688c0a747038 00:10:16.922 12:24:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee711dc0-349e-48c3-a623-688c0a747038 00:10:16.922 12:24:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:17.489 12:24:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:17.489 12:24:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:17.490 12:24:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ee711dc0-349e-48c3-a623-688c0a747038 lvol 150 00:10:17.749 12:24:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=88da7394-cd93-478f-b2a8-8e29f0a8b372 00:10:17.749 12:24:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:17.749 12:24:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:18.007 [2024-11-20 12:24:23.632119] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:18.007 [2024-11-20 12:24:23.632197] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:18.007 true 00:10:18.007 12:24:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:18.007 12:24:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u ee711dc0-349e-48c3-a623-688c0a747038 00:10:18.264 12:24:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:18.264 12:24:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:18.831 12:24:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 88da7394-cd93-478f-b2a8-8e29f0a8b372 00:10:19.090 12:24:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:19.348 [2024-11-20 12:24:24.964710] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:19.348 12:24:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:19.606 12:24:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2718197 00:10:19.606 12:24:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:19.606 12:24:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2718197 /var/tmp/bdevperf.sock 00:10:19.606 12:24:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2718197 ']' 00:10:19.606 12:24:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:19.606 12:24:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.606 12:24:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:19.606 12:24:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:19.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:19.606 12:24:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.606 12:24:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:19.865 [2024-11-20 12:24:25.375863] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
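The dirty variant repeats the same setup; what it adds shows up below once the 10-second run completes: the target is SIGKILLed so the lvstore is never cleanly unloaded, and the next load must recover the grown metadata. A hedged sketch of that recovery check, reusing the illustrative variables from the sketch above ($nvmfpid handling is likewise illustrative):

  rpc.py bdev_lvol_grow_lvstore -u "$lvs"       # grow under I/O, as in the clean run
  kill -9 "$nvmfpid"; wait "$nvmfpid" || true   # no clean shutdown: lvstore metadata stays dirty
  nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &              # restart the target
  nvmfpid=$!
  # re-creating the AIO bdev reloads the blobstore; expect the
  # "bs_recover: Performing recovery on blobstore" notices seen below
  rpc.py bdev_aio_create /tmp/aio_bdev aio_bdev 4096
  rpc.py bdev_get_bdevs -b "$lvol" -t 2000      # the lvol survived the crash
  # the grow was persisted: still 99 data clusters, 61 free
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

The "Performing recovery on blobstore" notices further down are the signal that the dirty metadata was replayed on load; the cluster checks that follow them (99 total, 61 free) confirm the grow survived the kill.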
00:10:19.865 [2024-11-20 12:24:25.375973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2718197 ] 00:10:19.865 [2024-11-20 12:24:25.448146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.865 [2024-11-20 12:24:25.510759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.123 12:24:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.123 12:24:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:20.123 12:24:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:20.381 Nvme0n1 00:10:20.381 12:24:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:20.640 [ 00:10:20.640 { 00:10:20.640 "name": "Nvme0n1", 00:10:20.640 "aliases": [ 00:10:20.640 "88da7394-cd93-478f-b2a8-8e29f0a8b372" 00:10:20.640 ], 00:10:20.640 "product_name": "NVMe disk", 00:10:20.640 "block_size": 4096, 00:10:20.640 "num_blocks": 38912, 00:10:20.640 "uuid": "88da7394-cd93-478f-b2a8-8e29f0a8b372", 00:10:20.640 "numa_id": 1, 00:10:20.640 "assigned_rate_limits": { 00:10:20.640 "rw_ios_per_sec": 0, 00:10:20.640 "rw_mbytes_per_sec": 0, 00:10:20.640 "r_mbytes_per_sec": 0, 00:10:20.640 "w_mbytes_per_sec": 0 00:10:20.640 }, 00:10:20.640 "claimed": false, 00:10:20.640 "zoned": false, 00:10:20.640 "supported_io_types": { 00:10:20.640 "read": true, 00:10:20.640 "write": true, 00:10:20.640 "unmap": true, 00:10:20.640 "flush": true, 00:10:20.640 "reset": true, 00:10:20.640 "nvme_admin": true, 00:10:20.640 "nvme_io": true, 00:10:20.640 "nvme_io_md": false, 00:10:20.640 "write_zeroes": true, 00:10:20.640 "zcopy": false, 00:10:20.640 "get_zone_info": false, 00:10:20.640 "zone_management": false, 00:10:20.640 "zone_append": false, 00:10:20.640 "compare": true, 00:10:20.640 "compare_and_write": true, 00:10:20.640 "abort": true, 00:10:20.640 "seek_hole": false, 00:10:20.640 "seek_data": false, 00:10:20.640 "copy": true, 00:10:20.640 "nvme_iov_md": false 00:10:20.640 }, 00:10:20.640 "memory_domains": [ 00:10:20.640 { 00:10:20.640 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:10:20.640 "dma_device_type": 0 00:10:20.640 } 00:10:20.640 ], 00:10:20.640 "driver_specific": { 00:10:20.640 "nvme": [ 00:10:20.640 { 00:10:20.640 "trid": { 00:10:20.640 "trtype": "RDMA", 00:10:20.640 "adrfam": "IPv4", 00:10:20.640 "traddr": "192.168.100.8", 00:10:20.640 "trsvcid": "4420", 00:10:20.640 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:20.640 }, 00:10:20.640 "ctrlr_data": { 00:10:20.640 "cntlid": 1, 00:10:20.640 "vendor_id": "0x8086", 00:10:20.640 "model_number": "SPDK bdev Controller", 00:10:20.640 "serial_number": "SPDK0", 00:10:20.640 "firmware_revision": "25.01", 00:10:20.640 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:20.640 "oacs": { 00:10:20.640 "security": 0, 00:10:20.640 "format": 0, 00:10:20.640 "firmware": 0, 00:10:20.640 "ns_manage": 0 00:10:20.640 }, 00:10:20.640 "multi_ctrlr": true, 
00:10:20.640 "ana_reporting": false 00:10:20.640 }, 00:10:20.640 "vs": { 00:10:20.640 "nvme_version": "1.3" 00:10:20.640 }, 00:10:20.640 "ns_data": { 00:10:20.640 "id": 1, 00:10:20.640 "can_share": true 00:10:20.640 } 00:10:20.640 } 00:10:20.640 ], 00:10:20.640 "mp_policy": "active_passive" 00:10:20.640 } 00:10:20.640 } 00:10:20.640 ] 00:10:20.899 12:24:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2718298 00:10:20.899 12:24:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:20.899 12:24:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:20.899 Running I/O for 10 seconds... 00:10:21.834 Latency(us) 00:10:21.834 [2024-11-20T11:24:27.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:21.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.834 Nvme0n1 : 1.00 19521.00 76.25 0.00 0.00 0.00 0.00 0.00 00:10:21.834 [2024-11-20T11:24:27.600Z] =================================================================================================================== 00:10:21.834 [2024-11-20T11:24:27.600Z] Total : 19521.00 76.25 0.00 0.00 0.00 0.00 0.00 00:10:21.834 00:10:22.769 12:24:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ee711dc0-349e-48c3-a623-688c0a747038 00:10:23.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:23.027 Nvme0n1 : 2.00 19760.50 77.19 0.00 0.00 0.00 0.00 0.00 00:10:23.027 [2024-11-20T11:24:28.793Z] =================================================================================================================== 00:10:23.027 [2024-11-20T11:24:28.793Z] Total : 19760.50 77.19 0.00 0.00 0.00 0.00 0.00 00:10:23.027 00:10:23.027 true 00:10:23.027 12:24:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee711dc0-349e-48c3-a623-688c0a747038 00:10:23.027 12:24:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:23.595 12:24:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:23.595 12:24:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:23.595 12:24:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2718298 00:10:23.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:23.853 Nvme0n1 : 3.00 19895.33 77.72 0.00 0.00 0.00 0.00 0.00 00:10:23.853 [2024-11-20T11:24:29.619Z] =================================================================================================================== 00:10:23.853 [2024-11-20T11:24:29.619Z] Total : 19895.33 77.72 0.00 0.00 0.00 0.00 0.00 00:10:23.853 00:10:24.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:24.787 Nvme0n1 : 4.00 20001.75 78.13 0.00 0.00 0.00 0.00 0.00 00:10:24.787 [2024-11-20T11:24:30.553Z] 
=================================================================================================================== 00:10:24.787 [2024-11-20T11:24:30.553Z] Total : 20001.75 78.13 0.00 0.00 0.00 0.00 0.00 00:10:24.788 00:10:26.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:26.163 Nvme0n1 : 5.00 20089.60 78.47 0.00 0.00 0.00 0.00 0.00 00:10:26.163 [2024-11-20T11:24:31.929Z] =================================================================================================================== 00:10:26.163 [2024-11-20T11:24:31.929Z] Total : 20089.60 78.47 0.00 0.00 0.00 0.00 0.00 00:10:26.163 00:10:27.098 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:27.098 Nvme0n1 : 6.00 20155.33 78.73 0.00 0.00 0.00 0.00 0.00 00:10:27.098 [2024-11-20T11:24:32.864Z] =================================================================================================================== 00:10:27.098 [2024-11-20T11:24:32.864Z] Total : 20155.33 78.73 0.00 0.00 0.00 0.00 0.00 00:10:27.098 00:10:28.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:28.063 Nvme0n1 : 7.00 20210.43 78.95 0.00 0.00 0.00 0.00 0.00 00:10:28.063 [2024-11-20T11:24:33.829Z] =================================================================================================================== 00:10:28.063 [2024-11-20T11:24:33.829Z] Total : 20210.43 78.95 0.00 0.00 0.00 0.00 0.00 00:10:28.063 00:10:28.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:28.998 Nvme0n1 : 8.00 20251.88 79.11 0.00 0.00 0.00 0.00 0.00 00:10:28.998 [2024-11-20T11:24:34.764Z] =================================================================================================================== 00:10:28.998 [2024-11-20T11:24:34.764Z] Total : 20251.88 79.11 0.00 0.00 0.00 0.00 0.00 00:10:28.998 00:10:29.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:29.932 Nvme0n1 : 9.00 20284.67 79.24 0.00 0.00 0.00 0.00 0.00 00:10:29.932 [2024-11-20T11:24:35.698Z] =================================================================================================================== 00:10:29.932 [2024-11-20T11:24:35.698Z] Total : 20284.67 79.24 0.00 0.00 0.00 0.00 0.00 00:10:29.932 00:10:30.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:30.867 Nvme0n1 : 10.00 20304.80 79.32 0.00 0.00 0.00 0.00 0.00 00:10:30.867 [2024-11-20T11:24:36.633Z] =================================================================================================================== 00:10:30.867 [2024-11-20T11:24:36.633Z] Total : 20304.80 79.32 0.00 0.00 0.00 0.00 0.00 00:10:30.867 00:10:30.867 00:10:30.867 Latency(us) 00:10:30.867 [2024-11-20T11:24:36.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:30.867 Nvme0n1 : 10.01 20305.47 79.32 0.00 0.00 6298.76 4684.61 14951.92 00:10:30.867 [2024-11-20T11:24:36.633Z] =================================================================================================================== 00:10:30.867 [2024-11-20T11:24:36.633Z] Total : 20305.47 79.32 0.00 0.00 6298.76 4684.61 14951.92 00:10:30.867 { 00:10:30.867 "results": [ 00:10:30.867 { 00:10:30.867 "job": "Nvme0n1", 00:10:30.867 "core_mask": "0x2", 00:10:30.867 "workload": "randwrite", 00:10:30.867 "status": "finished", 00:10:30.867 "queue_depth": 128, 00:10:30.867 "io_size": 4096, 00:10:30.867 
"runtime": 10.005382, 00:10:30.867 "iops": 20305.47159518747, 00:10:30.867 "mibps": 79.31824841870106, 00:10:30.867 "io_failed": 0, 00:10:30.867 "io_timeout": 0, 00:10:30.867 "avg_latency_us": 6298.761791831011, 00:10:30.867 "min_latency_us": 4684.61037037037, 00:10:30.867 "max_latency_us": 14951.917037037038 00:10:30.867 } 00:10:30.867 ], 00:10:30.867 "core_count": 1 00:10:30.867 } 00:10:30.867 12:24:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2718197 00:10:30.867 12:24:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2718197 ']' 00:10:30.867 12:24:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2718197 00:10:30.867 12:24:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:10:30.867 12:24:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.867 12:24:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2718197 00:10:30.867 12:24:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:30.867 12:24:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:30.867 12:24:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2718197' 00:10:30.867 killing process with pid 2718197 00:10:30.867 12:24:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2718197 00:10:30.867 Received shutdown signal, test time was about 10.000000 seconds 00:10:30.867 00:10:30.867 Latency(us) 00:10:30.867 [2024-11-20T11:24:36.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.867 [2024-11-20T11:24:36.633Z] =================================================================================================================== 00:10:30.867 [2024-11-20T11:24:36.633Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:30.867 12:24:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2718197 00:10:31.127 12:24:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:31.694 12:24:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:31.952 12:24:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:31.952 12:24:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee711dc0-349e-48c3-a623-688c0a747038 00:10:32.211 12:24:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:32.211 12:24:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:32.211 12:24:37 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2715724 00:10:32.211 12:24:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2715724 00:10:32.211 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2715724 Killed "${NVMF_APP[@]}" "$@" 00:10:32.211 12:24:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:32.211 12:24:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:32.211 12:24:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:32.211 12:24:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:32.211 12:24:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:32.211 12:24:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2719372 00:10:32.211 12:24:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:32.211 12:24:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2719372 00:10:32.211 12:24:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2719372 ']' 00:10:32.211 12:24:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.211 12:24:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.211 12:24:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.211 12:24:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.211 12:24:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:32.211 [2024-11-20 12:24:37.926765] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:10:32.211 [2024-11-20 12:24:37.926865] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.469 [2024-11-20 12:24:38.002245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.469 [2024-11-20 12:24:38.064608] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.469 [2024-11-20 12:24:38.064672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.469 [2024-11-20 12:24:38.064688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.469 [2024-11-20 12:24:38.064701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:32.470 [2024-11-20 12:24:38.064712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.470 [2024-11-20 12:24:38.065230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.470 12:24:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.470 12:24:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:32.470 12:24:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:32.470 12:24:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:32.470 12:24:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:32.470 12:24:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.470 12:24:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:33.036 [2024-11-20 12:24:38.546855] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:33.036 [2024-11-20 12:24:38.546995] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:33.036 [2024-11-20 12:24:38.547051] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:33.036 12:24:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:33.036 12:24:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 88da7394-cd93-478f-b2a8-8e29f0a8b372 00:10:33.036 12:24:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=88da7394-cd93-478f-b2a8-8e29f0a8b372 00:10:33.036 12:24:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.036 12:24:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:33.036 12:24:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.036 12:24:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.036 12:24:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:33.294 12:24:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 88da7394-cd93-478f-b2a8-8e29f0a8b372 -t 2000 00:10:33.558 [ 00:10:33.558 { 00:10:33.558 "name": "88da7394-cd93-478f-b2a8-8e29f0a8b372", 00:10:33.558 "aliases": [ 00:10:33.558 "lvs/lvol" 00:10:33.558 ], 00:10:33.558 "product_name": "Logical Volume", 00:10:33.558 "block_size": 4096, 00:10:33.558 "num_blocks": 38912, 00:10:33.558 "uuid": "88da7394-cd93-478f-b2a8-8e29f0a8b372", 00:10:33.558 "assigned_rate_limits": { 00:10:33.558 "rw_ios_per_sec": 0, 00:10:33.558 "rw_mbytes_per_sec": 0, 
00:10:33.558 "r_mbytes_per_sec": 0, 00:10:33.558 "w_mbytes_per_sec": 0 00:10:33.558 }, 00:10:33.558 "claimed": false, 00:10:33.558 "zoned": false, 00:10:33.558 "supported_io_types": { 00:10:33.559 "read": true, 00:10:33.559 "write": true, 00:10:33.559 "unmap": true, 00:10:33.559 "flush": false, 00:10:33.559 "reset": true, 00:10:33.559 "nvme_admin": false, 00:10:33.559 "nvme_io": false, 00:10:33.559 "nvme_io_md": false, 00:10:33.559 "write_zeroes": true, 00:10:33.559 "zcopy": false, 00:10:33.559 "get_zone_info": false, 00:10:33.559 "zone_management": false, 00:10:33.559 "zone_append": false, 00:10:33.559 "compare": false, 00:10:33.559 "compare_and_write": false, 00:10:33.559 "abort": false, 00:10:33.559 "seek_hole": true, 00:10:33.559 "seek_data": true, 00:10:33.559 "copy": false, 00:10:33.559 "nvme_iov_md": false 00:10:33.559 }, 00:10:33.559 "driver_specific": { 00:10:33.559 "lvol": { 00:10:33.559 "lvol_store_uuid": "ee711dc0-349e-48c3-a623-688c0a747038", 00:10:33.559 "base_bdev": "aio_bdev", 00:10:33.559 "thin_provision": false, 00:10:33.559 "num_allocated_clusters": 38, 00:10:33.559 "snapshot": false, 00:10:33.559 "clone": false, 00:10:33.559 "esnap_clone": false 00:10:33.559 } 00:10:33.559 } 00:10:33.559 } 00:10:33.559 ] 00:10:33.559 12:24:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:33.559 12:24:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:33.559 12:24:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee711dc0-349e-48c3-a623-688c0a747038 00:10:33.817 12:24:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:33.817 12:24:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee711dc0-349e-48c3-a623-688c0a747038 00:10:33.817 12:24:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:34.384 12:24:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:34.384 12:24:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:34.643 [2024-11-20 12:24:40.217100] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:34.643 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee711dc0-349e-48c3-a623-688c0a747038 00:10:34.643 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:34.643 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee711dc0-349e-48c3-a623-688c0a747038 00:10:34.643 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:34.643 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:34.643 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:34.643 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:34.643 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:34.643 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:34.643 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:34.643 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:10:34.643 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee711dc0-349e-48c3-a623-688c0a747038 00:10:34.902 request: 00:10:34.902 { 00:10:34.902 "uuid": "ee711dc0-349e-48c3-a623-688c0a747038", 00:10:34.902 "method": "bdev_lvol_get_lvstores", 00:10:34.902 "req_id": 1 00:10:34.902 } 00:10:34.902 Got JSON-RPC error response 00:10:34.902 response: 00:10:34.902 { 00:10:34.902 "code": -19, 00:10:34.902 "message": "No such device" 00:10:34.902 } 00:10:34.902 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:34.902 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:34.902 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:34.902 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:34.902 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:35.160 aio_bdev 00:10:35.418 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 88da7394-cd93-478f-b2a8-8e29f0a8b372 00:10:35.418 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=88da7394-cd93-478f-b2a8-8e29f0a8b372 00:10:35.418 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.418 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:35.418 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.418 12:24:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.418 12:24:40 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:35.677 12:24:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 88da7394-cd93-478f-b2a8-8e29f0a8b372 -t 2000 00:10:35.935 [ 00:10:35.935 { 00:10:35.935 "name": "88da7394-cd93-478f-b2a8-8e29f0a8b372", 00:10:35.935 "aliases": [ 00:10:35.935 "lvs/lvol" 00:10:35.935 ], 00:10:35.935 "product_name": "Logical Volume", 00:10:35.935 "block_size": 4096, 00:10:35.935 "num_blocks": 38912, 00:10:35.935 "uuid": "88da7394-cd93-478f-b2a8-8e29f0a8b372", 00:10:35.935 "assigned_rate_limits": { 00:10:35.935 "rw_ios_per_sec": 0, 00:10:35.935 "rw_mbytes_per_sec": 0, 00:10:35.935 "r_mbytes_per_sec": 0, 00:10:35.935 "w_mbytes_per_sec": 0 00:10:35.935 }, 00:10:35.935 "claimed": false, 00:10:35.935 "zoned": false, 00:10:35.935 "supported_io_types": { 00:10:35.935 "read": true, 00:10:35.935 "write": true, 00:10:35.935 "unmap": true, 00:10:35.935 "flush": false, 00:10:35.935 "reset": true, 00:10:35.935 "nvme_admin": false, 00:10:35.935 "nvme_io": false, 00:10:35.935 "nvme_io_md": false, 00:10:35.935 "write_zeroes": true, 00:10:35.935 "zcopy": false, 00:10:35.935 "get_zone_info": false, 00:10:35.935 "zone_management": false, 00:10:35.935 "zone_append": false, 00:10:35.935 "compare": false, 00:10:35.935 "compare_and_write": false, 00:10:35.935 "abort": false, 00:10:35.935 "seek_hole": true, 00:10:35.935 "seek_data": true, 00:10:35.935 "copy": false, 00:10:35.935 "nvme_iov_md": false 00:10:35.935 }, 00:10:35.935 "driver_specific": { 00:10:35.935 "lvol": { 00:10:35.935 "lvol_store_uuid": "ee711dc0-349e-48c3-a623-688c0a747038", 00:10:35.935 "base_bdev": "aio_bdev", 00:10:35.935 "thin_provision": false, 00:10:35.935 "num_allocated_clusters": 38, 00:10:35.935 "snapshot": false, 00:10:35.935 "clone": false, 00:10:35.935 "esnap_clone": false 00:10:35.935 } 00:10:35.935 } 00:10:35.935 } 00:10:35.935 ] 00:10:35.935 12:24:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:35.935 12:24:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:35.935 12:24:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee711dc0-349e-48c3-a623-688c0a747038 00:10:36.194 12:24:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:36.194 12:24:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee711dc0-349e-48c3-a623-688c0a747038 00:10:36.194 12:24:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:36.760 12:24:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:36.760 12:24:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 88da7394-cd93-478f-b2a8-8e29f0a8b372 00:10:37.019 12:24:42 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ee711dc0-349e-48c3-a623-688c0a747038 00:10:37.276 12:24:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:37.535 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:37.794 00:10:37.794 real 0m21.359s 00:10:37.794 user 0m54.805s 00:10:37.794 sys 0m3.755s 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:37.794 ************************************ 00:10:37.794 END TEST lvs_grow_dirty 00:10:37.794 ************************************ 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:37.794 nvmf_trace.0 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:37.794 rmmod nvme_rdma 00:10:37.794 rmmod nvme_fabrics 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:37.794 
12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2719372 ']' 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2719372 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2719372 ']' 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2719372 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2719372 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2719372' 00:10:37.794 killing process with pid 2719372 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2719372 00:10:37.794 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2719372 00:10:38.053 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:38.053 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:38.053 00:10:38.053 real 0m44.115s 00:10:38.053 user 1m21.314s 00:10:38.053 sys 0m7.196s 00:10:38.053 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.053 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:38.053 ************************************ 00:10:38.053 END TEST nvmf_lvs_grow 00:10:38.053 ************************************ 00:10:38.053 12:24:43 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:10:38.053 12:24:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:38.053 12:24:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.054 12:24:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:38.054 ************************************ 00:10:38.054 START TEST nvmf_bdev_io_wait 00:10:38.054 ************************************ 00:10:38.054 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:10:38.054 * Looking for test storage... 
00:10:38.054 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:38.054 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:38.054 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:38.054 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:38.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.314 --rc genhtml_branch_coverage=1 00:10:38.314 --rc genhtml_function_coverage=1 00:10:38.314 --rc genhtml_legend=1 00:10:38.314 --rc geninfo_all_blocks=1 00:10:38.314 --rc geninfo_unexecuted_blocks=1 00:10:38.314 00:10:38.314 ' 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:38.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.314 --rc genhtml_branch_coverage=1 00:10:38.314 --rc genhtml_function_coverage=1 00:10:38.314 --rc genhtml_legend=1 00:10:38.314 --rc geninfo_all_blocks=1 00:10:38.314 --rc geninfo_unexecuted_blocks=1 00:10:38.314 00:10:38.314 ' 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:38.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.314 --rc genhtml_branch_coverage=1 00:10:38.314 --rc genhtml_function_coverage=1 00:10:38.314 --rc genhtml_legend=1 00:10:38.314 --rc geninfo_all_blocks=1 00:10:38.314 --rc geninfo_unexecuted_blocks=1 00:10:38.314 00:10:38.314 ' 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:38.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.314 --rc genhtml_branch_coverage=1 00:10:38.314 --rc genhtml_function_coverage=1 00:10:38.314 --rc genhtml_legend=1 00:10:38.314 --rc geninfo_all_blocks=1 00:10:38.314 --rc geninfo_unexecuted_blocks=1 00:10:38.314 00:10:38.314 ' 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.314 12:24:43 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:38.314 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:38.315 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.315 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.315 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.315 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:38.315 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:38.315 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:38.315 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:38.315 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.315 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:38.315 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:38.315 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:38.315 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:38.315 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.315 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:38.315 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:38.315 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:38.315 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.315 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.315 12:24:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.315 12:24:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:38.315 12:24:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:38.315 12:24:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:38.315 12:24:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:40.850 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.850 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:40.850 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:40.850 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:40.850 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:40.850 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.851 12:24:46 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:10:40.851 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:10:40.851 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:10:40.851 12:24:46 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:10:40.851 Found net devices under 0000:83:00.0: mlx_0_0 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:10:40.851 Found net devices under 0000:83:00.1: mlx_0_1 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:40.851 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:40.852 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:40.852 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:10:40.852 altname enp131s0f0np0 00:10:40.852 inet 192.168.100.8/24 scope global mlx_0_0 00:10:40.852 valid_lft forever preferred_lft forever 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:40.852 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:40.852 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:10:40.852 altname enp131s0f1np1 00:10:40.852 inet 192.168.100.9/24 scope global mlx_0_1 00:10:40.852 valid_lft forever preferred_lft forever 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:40.852 12:24:46 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:40.852 192.168.100.9' 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:40.852 192.168.100.9' 00:10:40.852 12:24:46 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:40.852 192.168.100.9' 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2721301 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2721301 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2721301 ']' 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.852 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:40.852 [2024-11-20 12:24:46.453820] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
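The interface-to-address resolution that just ran (get_ip_address in nvmf/common.sh@116-117) reduces to a few shell lines. A minimal sketch, assuming the Mellanox netdev names this host reports (mlx_0_0, mlx_0_1); the ip/awk/cut pipeline is the same one visible in the trace above:

# Sketch of the harness's RDMA IP discovery (assumes mlx_0_* netdevs exist).
get_ip_address() {
    local interface=$1
    # "ip -o -4 addr show" prints one line per address; field 4 is the
    # CIDR form (e.g. 192.168.100.8/24), so cut strips the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run
modprobe nvme-rdma                                # loaded above before target start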
00:10:40.852 [2024-11-20 12:24:46.453911] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.852 [2024-11-20 12:24:46.524726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.852 [2024-11-20 12:24:46.588984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.852 [2024-11-20 12:24:46.589044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.852 [2024-11-20 12:24:46.589066] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.852 [2024-11-20 12:24:46.589081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.852 [2024-11-20 12:24:46.589092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:40.852 [2024-11-20 12:24:46.590383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.852 [2024-11-20 12:24:46.590461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.852 [2024-11-20 12:24:46.590513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.852 [2024-11-20 12:24:46.590518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.111 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.111 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:41.111 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:41.111 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:41.111 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:41.111 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.111 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:41.111 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.111 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:41.111 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.111 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:41.111 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.111 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:41.111 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.111 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:41.111 12:24:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.111 12:24:46 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:41.369 [2024-11-20 12:24:46.875812] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1086df0/0x108b2e0) succeed. 00:10:41.369 [2024-11-20 12:24:46.891946] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1088480/0x10cc980) succeed. 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:41.369 Malloc0 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:41.369 [2024-11-20 12:24:47.109996] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2721331 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2721333 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2721335 
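Stripped of the xtrace framing, the bring-up the test just performed in bdev_io_wait.sh@18-25 is a short RPC recipe ending in the "NVMe/RDMA Target Listening on 192.168.100.8 port 4420" notice above. A condensed sketch using the same rpc.py path and the exact parameters recorded in this log (the target was started with --wait-for-rpc, so bdev options must be set before framework init completes):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

$rpc bdev_set_options -p 5 -c 1          # bdev_io_wait.sh@18: small bdev pools
$rpc framework_start_init                # sh@19: finish the deferred init
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420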
00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:41.369 { 00:10:41.369 "params": { 00:10:41.369 "name": "Nvme$subsystem", 00:10:41.369 "trtype": "$TEST_TRANSPORT", 00:10:41.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:41.369 "adrfam": "ipv4", 00:10:41.369 "trsvcid": "$NVMF_PORT", 00:10:41.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:41.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:41.369 "hdgst": ${hdgst:-false}, 00:10:41.369 "ddgst": ${ddgst:-false} 00:10:41.369 }, 00:10:41.369 "method": "bdev_nvme_attach_controller" 00:10:41.369 } 00:10:41.369 EOF 00:10:41.369 )") 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2721337 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:41.369 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:41.369 { 00:10:41.370 "params": { 00:10:41.370 "name": "Nvme$subsystem", 00:10:41.370 "trtype": "$TEST_TRANSPORT", 00:10:41.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:41.370 "adrfam": "ipv4", 00:10:41.370 "trsvcid": "$NVMF_PORT", 00:10:41.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:41.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:41.370 "hdgst": ${hdgst:-false}, 00:10:41.370 "ddgst": ${ddgst:-false} 00:10:41.370 }, 00:10:41.370 "method": "bdev_nvme_attach_controller" 00:10:41.370 } 00:10:41.370 EOF 00:10:41.370 )") 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem 
config 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:41.370 { 00:10:41.370 "params": { 00:10:41.370 "name": "Nvme$subsystem", 00:10:41.370 "trtype": "$TEST_TRANSPORT", 00:10:41.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:41.370 "adrfam": "ipv4", 00:10:41.370 "trsvcid": "$NVMF_PORT", 00:10:41.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:41.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:41.370 "hdgst": ${hdgst:-false}, 00:10:41.370 "ddgst": ${ddgst:-false} 00:10:41.370 }, 00:10:41.370 "method": "bdev_nvme_attach_controller" 00:10:41.370 } 00:10:41.370 EOF 00:10:41.370 )") 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:41.370 { 00:10:41.370 "params": { 00:10:41.370 "name": "Nvme$subsystem", 00:10:41.370 "trtype": "$TEST_TRANSPORT", 00:10:41.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:41.370 "adrfam": "ipv4", 00:10:41.370 "trsvcid": "$NVMF_PORT", 00:10:41.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:41.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:41.370 "hdgst": ${hdgst:-false}, 00:10:41.370 "ddgst": ${ddgst:-false} 00:10:41.370 }, 00:10:41.370 "method": "bdev_nvme_attach_controller" 00:10:41.370 } 00:10:41.370 EOF 00:10:41.370 )") 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2721331 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:41.370 "params": { 00:10:41.370 "name": "Nvme1", 00:10:41.370 "trtype": "rdma", 00:10:41.370 "traddr": "192.168.100.8", 00:10:41.370 "adrfam": "ipv4", 00:10:41.370 "trsvcid": "4420", 00:10:41.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:41.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:41.370 "hdgst": false, 00:10:41.370 "ddgst": false 00:10:41.370 }, 00:10:41.370 "method": "bdev_nvme_attach_controller" 00:10:41.370 }' 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:41.370 "params": { 00:10:41.370 "name": "Nvme1", 00:10:41.370 "trtype": "rdma", 00:10:41.370 "traddr": "192.168.100.8", 00:10:41.370 "adrfam": "ipv4", 00:10:41.370 "trsvcid": "4420", 00:10:41.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:41.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:41.370 "hdgst": false, 00:10:41.370 "ddgst": false 00:10:41.370 }, 00:10:41.370 "method": "bdev_nvme_attach_controller" 00:10:41.370 }' 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:41.370 "params": { 00:10:41.370 "name": "Nvme1", 00:10:41.370 "trtype": "rdma", 00:10:41.370 "traddr": "192.168.100.8", 00:10:41.370 "adrfam": "ipv4", 00:10:41.370 "trsvcid": "4420", 00:10:41.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:41.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:41.370 "hdgst": false, 00:10:41.370 "ddgst": false 00:10:41.370 }, 00:10:41.370 "method": "bdev_nvme_attach_controller" 00:10:41.370 }' 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:41.370 12:24:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:41.370 "params": { 00:10:41.370 "name": "Nvme1", 00:10:41.370 "trtype": "rdma", 00:10:41.370 "traddr": "192.168.100.8", 00:10:41.370 "adrfam": "ipv4", 00:10:41.370 "trsvcid": "4420", 00:10:41.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:41.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:41.370 "hdgst": false, 00:10:41.370 "ddgst": false 00:10:41.370 }, 00:10:41.370 "method": "bdev_nvme_attach_controller" 00:10:41.370 }' 00:10:41.628 [2024-11-20 12:24:47.163258] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:10:41.628 [2024-11-20 12:24:47.163287] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:10:41.628 [2024-11-20 12:24:47.163288] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:10:41.628 [2024-11-20 12:24:47.163287] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
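[Editor's note] Each bdevperf instance reads its bdev configuration from --json /dev/fd/63, i.e. from a bash process substitution: gen_nvmf_target_json expands the heredoc template above into the concrete Nvme1 attach block that printf renders, and the file descriptor hands that JSON to bdevperf without touching disk. A sketch of the pattern (illustrative; the helper name comes from the trace):

    # <(...) expands to /dev/fd/NN -- the same mechanism behind --json /dev/fd/63 above.
    BDEVPERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
    $BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256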
00:10:41.628 [2024-11-20 12:24:47.163344] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:41.628 [2024-11-20 12:24:47.163383] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:41.628 [2024-11-20 12:24:47.163384] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:41.628 [2024-11-20 12:24:47.163385] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:41.628 [2024-11-20 12:24:47.321920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.628 [2024-11-20 12:24:47.374908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:41.628 [2024-11-20 12:24:47.392178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.891 [2024-11-20 12:24:47.445744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:41.891 [2024-11-20 12:24:47.463545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.891 [2024-11-20 12:24:47.516677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:41.891 [2024-11-20 12:24:47.524886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.891 [2024-11-20 12:24:47.578461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:41.891 Running I/O for 1 seconds... 00:10:42.157 Running I/O for 1 seconds... 00:10:42.157 Running I/O for 1 seconds... 00:10:42.157 Running I/O for 1 seconds...
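[Editor's note] Four bdevperf processes start here at once, which is why their EAL banners interleave in the raw console stream (de-interleaved above). Each gets its own core mask (-c 0x10/0x20/0x40/0x80) and shared-memory id (-i 1..4), and DPDK derives --file-prefix=spdk1..spdk4 from the id so the instances keep separate hugepage mappings and runtime state. A hedged sketch of the isolation (the /var/run/dpdk directory layout is an assumption about the DPDK version in use):

    # Two EAL processes can share a host only with distinct prefixes:
    bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 &  # state under /var/run/dpdk/spdk1/
    bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 &   # state under /var/run/dpdk/spdk2/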
00:10:43.112 14576.00 IOPS, 56.94 MiB/s 00:10:43.112 Latency(us) 00:10:43.112 [2024-11-20T11:24:48.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:43.112 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:43.112 Nvme1n1 : 1.01 14611.11 57.07 0.00 0.00 8725.02 5509.88 15534.46 00:10:43.112 [2024-11-20T11:24:48.878Z] =================================================================================================================== 00:10:43.112 [2024-11-20T11:24:48.878Z] Total : 14611.11 57.07 0.00 0.00 8725.02 5509.88 15534.46 00:10:43.112 158312.00 IOPS, 618.41 MiB/s 00:10:43.112 Latency(us) 00:10:43.112 [2024-11-20T11:24:48.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:43.112 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:43.112 Nvme1n1 : 1.00 157974.04 617.09 0.00 0.00 806.62 342.85 2803.48 00:10:43.112 [2024-11-20T11:24:48.878Z] =================================================================================================================== 00:10:43.112 [2024-11-20T11:24:48.878Z] Total : 157974.04 617.09 0.00 0.00 806.62 342.85 2803.48 00:10:43.112 11677.00 IOPS, 45.61 MiB/s 00:10:43.112 Latency(us) 00:10:43.112 [2024-11-20T11:24:48.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:43.112 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:43.112 Nvme1n1 : 1.01 11726.57 45.81 0.00 0.00 10869.21 6116.69 26020.22 00:10:43.112 [2024-11-20T11:24:48.878Z] =================================================================================================================== 00:10:43.112 [2024-11-20T11:24:48.878Z] Total : 11726.57 45.81 0.00 0.00 10869.21 6116.69 26020.22 00:10:43.112 15219.00 IOPS, 59.45 MiB/s 00:10:43.112 Latency(us) 00:10:43.112 [2024-11-20T11:24:48.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:43.112 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:43.112 Nvme1n1 : 1.01 15290.50 59.73 0.00 0.00 8342.95 4247.70 18058.81 00:10:43.112 [2024-11-20T11:24:48.878Z] =================================================================================================================== 00:10:43.112 [2024-11-20T11:24:48.879Z] Total : 15290.50 59.73 0.00 0.00 8342.95 4247.70 18058.81 00:10:43.113 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2721333 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2721335 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2721337 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:43.374 rmmod nvme_rdma 00:10:43.374 rmmod nvme_fabrics 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2721301 ']' 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2721301 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2721301 ']' 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2721301 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.374 12:24:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2721301 00:10:43.374 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.374 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.374 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2721301' 00:10:43.374 killing process with pid 2721301 00:10:43.374 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2721301 00:10:43.374 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2721301 00:10:43.638 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:43.638 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:43.638 00:10:43.638 real 0m5.594s 00:10:43.638 user 0m17.067s 00:10:43.638 sys 0m2.942s 00:10:43.638 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.638 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:43.638 ************************************ 00:10:43.638 END TEST nvmf_bdev_io_wait 00:10:43.638 ************************************ 00:10:43.638 12:24:49 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:10:43.638 12:24:49 
nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.638 12:24:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.638 12:24:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:43.638 ************************************ 00:10:43.638 START TEST nvmf_queue_depth 00:10:43.638 ************************************ 00:10:43.638 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:10:43.638 * Looking for test storage... 00:10:43.638 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:43.638 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:43.638 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:10:43.638 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:43.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.963 --rc genhtml_branch_coverage=1 00:10:43.963 --rc genhtml_function_coverage=1 00:10:43.963 --rc genhtml_legend=1 00:10:43.963 --rc geninfo_all_blocks=1 00:10:43.963 --rc geninfo_unexecuted_blocks=1 00:10:43.963 00:10:43.963 ' 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:43.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.963 --rc genhtml_branch_coverage=1 00:10:43.963 --rc genhtml_function_coverage=1 00:10:43.963 --rc genhtml_legend=1 00:10:43.963 --rc geninfo_all_blocks=1 00:10:43.963 --rc geninfo_unexecuted_blocks=1 00:10:43.963 00:10:43.963 ' 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:43.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.963 --rc genhtml_branch_coverage=1 00:10:43.963 --rc genhtml_function_coverage=1 00:10:43.963 --rc genhtml_legend=1 00:10:43.963 --rc geninfo_all_blocks=1 00:10:43.963 --rc geninfo_unexecuted_blocks=1 00:10:43.963 00:10:43.963 ' 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:43.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.963 --rc genhtml_branch_coverage=1 00:10:43.963 --rc genhtml_function_coverage=1 00:10:43.963 --rc genhtml_legend=1 00:10:43.963 --rc geninfo_all_blocks=1 00:10:43.963 --rc geninfo_unexecuted_blocks=1 00:10:43.963 00:10:43.963 ' 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.963 12:24:49 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.963 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.964 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:43.964 12:24:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:10:46.561 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:10:46.561 Found 0000:83:00.1 (0x15b3 - 0x1015) 
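[Editor's note] The discovery loop above matches PCI ids against the e810/x722/mlx tables; both ports of one Mellanox adapter (vendor 0x15b3, device 0x1015, i.e. a ConnectX-4 Lx) turn up at 0000:83:00.0 and 0000:83:00.1. A quick manual equivalent using standard pciutils and sysfs (illustrative, not part of the suite):

    lspci -d 15b3:1015                            # list the two matched ports
    ls /sys/bus/pci/devices/0000:83:00.0/net/     # -> mlx_0_0, the netdev the script records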
00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:46.561 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:10:46.562 Found net devices under 0000:83:00.0: mlx_0_0 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:10:46.562 Found net devices under 0000:83:00.1: mlx_0_1 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:46.562 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:46.562 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:10:46.562 altname enp131s0f0np0 00:10:46.562 inet 192.168.100.8/24 scope global mlx_0_0 00:10:46.562 valid_lft forever preferred_lft forever 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:46.562 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:46.562 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:10:46.562 altname enp131s0f1np1 00:10:46.562 inet 192.168.100.9/24 scope global mlx_0_1 00:10:46.562 valid_lft forever preferred_lft forever 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
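[Editor's note] get_ip_address boils down to the ip/awk/cut pipeline traced above; on this host it yields 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1, even though both links report state DOWN in the ip output. The pipeline, runnable as-is:

    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # prints 192.168.100.8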
00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:46.562 192.168.100.9' 00:10:46.562 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:46.562 192.168.100.9' 00:10:46.563 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # head -n 1 00:10:46.563 12:24:51 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:46.563 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:46.563 192.168.100.9' 00:10:46.563 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # head -n 1 00:10:46.563 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2 00:10:46.563 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:46.563 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:46.563 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:46.563 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:46.563 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:46.563 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:46.563 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:46.563 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:46.563 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:46.563 12:24:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:46.563 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2722934 00:10:46.563 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2722934 00:10:46.563 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2722934 ']' 00:10:46.563 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:46.563 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.563 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.563 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.563 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.563 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:46.563 [2024-11-20 12:24:52.069443] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
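[Editor's note] For the queue-depth test a fresh nvmf_tgt is launched pinned to core 1 (-m 0x2) with all tracepoint groups enabled (-e 0xFFFF), and the suite blocks in waitforlisten until the /var/tmp/spdk.sock RPC socket answers. A minimal sketch of that start-and-wait pattern (the polling loop is an assumption; the real helper is waitforlisten):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll until the RPC server is up, roughly what waitforlisten does:
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done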
00:10:46.563 [2024-11-20 12:24:52.069568] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.563 [2024-11-20 12:24:52.146394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.563 [2024-11-20 12:24:52.209333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.563 [2024-11-20 12:24:52.209397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.563 [2024-11-20 12:24:52.209413] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.563 [2024-11-20 12:24:52.209426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.563 [2024-11-20 12:24:52.209437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.563 [2024-11-20 12:24:52.209959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.824 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.824 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:46.824 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:46.824 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:46.824 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:46.824 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.824 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:46.824 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.824 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:46.824 [2024-11-20 12:24:52.423548] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ed4cb0/0x1ed91a0) succeed. 00:10:46.824 [2024-11-20 12:24:52.451236] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ed6160/0x1f1a840) succeed. 
00:10:46.824 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.824 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:46.824 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.824 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:46.824 Malloc0 00:10:46.824 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.824 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:46.824 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.824 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:47.084 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.084 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:47.084 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.084 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:47.084 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.084 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:47.084 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.084 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:47.084 [2024-11-20 12:24:52.601563] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:47.084 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.084 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2722959 00:10:47.084 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:47.084 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:47.084 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2722959 /var/tmp/bdevperf.sock 00:10:47.084 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2722959 ']' 00:10:47.084 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:47.084 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.084 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:47.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:47.084 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.084 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:47.084 [2024-11-20 12:24:52.660035] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:10:47.084 [2024-11-20 12:24:52.660136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2722959 ] 00:10:47.084 [2024-11-20 12:24:52.733270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.084 [2024-11-20 12:24:52.795956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.343 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.343 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:47.343 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:47.343 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.343 12:24:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:47.343 NVMe0n1 00:10:47.343 12:24:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.343 12:24:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:47.602 Running I/O for 10 seconds... 
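Note: the sequence above provisions the export end to end (a 64 MiB, 512 B-block malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with one namespace, and an RDMA listener on 192.168.100.8:4420), then starts bdevperf as the initiator at queue depth 1024 with 4 KiB verify I/O for 10 seconds. A condensed sketch of those steps with the same arguments, assuming the default RPC sockets seen in the trace:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    R="$SPDK/scripts/rpc.py"
    # Target side: backing bdev, subsystem, namespace, RDMA listener.
    $R bdev_malloc_create 64 512 -b Malloc0
    $R nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $R nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $R nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # Initiator side: bdevperf on its own RPC socket, then kick off the run.
    $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    $R -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
      -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests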
00:10:49.484 10240.00 IOPS, 40.00 MiB/s [2024-11-20T11:24:56.190Z] 10752.00 IOPS, 42.00 MiB/s [2024-11-20T11:24:57.580Z] 10922.67 IOPS, 42.67 MiB/s [2024-11-20T11:24:58.529Z] 11008.00 IOPS, 43.00 MiB/s [2024-11-20T11:24:59.469Z] 11059.20 IOPS, 43.20 MiB/s [2024-11-20T11:25:00.411Z] 11083.83 IOPS, 43.30 MiB/s [2024-11-20T11:25:01.353Z] 11081.71 IOPS, 43.29 MiB/s [2024-11-20T11:25:02.297Z] 11087.50 IOPS, 43.31 MiB/s [2024-11-20T11:25:03.239Z] 11083.78 IOPS, 43.30 MiB/s [2024-11-20T11:25:03.501Z] 11090.50 IOPS, 43.32 MiB/s 00:10:57.735 Latency(us) 00:10:57.735 [2024-11-20T11:25:03.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:57.735 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:57.735 Verification LBA range: start 0x0 length 0x4000 00:10:57.735 NVMe0n1 : 10.05 11122.53 43.45 0.00 0.00 91670.05 5437.06 57477.50 00:10:57.735 [2024-11-20T11:25:03.501Z] =================================================================================================================== 00:10:57.735 [2024-11-20T11:25:03.501Z] Total : 11122.53 43.45 0.00 0.00 91670.05 5437.06 57477.50 00:10:57.735 { 00:10:57.735 "results": [ 00:10:57.735 { 00:10:57.735 "job": "NVMe0n1", 00:10:57.735 "core_mask": "0x1", 00:10:57.735 "workload": "verify", 00:10:57.735 "status": "finished", 00:10:57.735 "verify_range": { 00:10:57.736 "start": 0, 00:10:57.736 "length": 16384 00:10:57.736 }, 00:10:57.736 "queue_depth": 1024, 00:10:57.736 "io_size": 4096, 00:10:57.736 "runtime": 10.048613, 00:10:57.736 "iops": 11122.53004469373, 00:10:57.736 "mibps": 43.447382987084886, 00:10:57.736 "io_failed": 0, 00:10:57.736 "io_timeout": 0, 00:10:57.736 "avg_latency_us": 91670.0516289788, 00:10:57.736 "min_latency_us": 5437.060740740741, 00:10:57.736 "max_latency_us": 57477.49925925926 00:10:57.736 } 00:10:57.736 ], 00:10:57.736 "core_count": 1 00:10:57.736 } 00:10:57.736 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2722959 00:10:57.736 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2722959 ']' 00:10:57.736 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2722959 00:10:57.736 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:57.736 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.736 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2722959 00:10:57.736 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.736 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.736 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2722959' 00:10:57.736 killing process with pid 2722959 00:10:57.736 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2722959 00:10:57.736 Received shutdown signal, test time was about 10.000000 seconds 00:10:57.736 00:10:57.736 Latency(us) 00:10:57.736 [2024-11-20T11:25:03.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:57.736 [2024-11-20T11:25:03.502Z] 
=================================================================================================================== 00:10:57.736 [2024-11-20T11:25:03.502Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:57.736 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2722959 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:57.997 rmmod nvme_rdma 00:10:57.997 rmmod nvme_fabrics 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2722934 ']' 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2722934 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2722934 ']' 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2722934 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2722934 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2722934' 00:10:57.997 killing process with pid 2722934 00:10:57.997 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2722934 00:10:57.998 12:25:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2722934 00:10:58.568 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:58.568 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:58.568 00:10:58.568 real 0m14.735s 00:10:58.568 user 0m23.959s 00:10:58.568 sys 0m2.483s 00:10:58.568 
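Note: the reported figures are self-consistent. With 1024 I/Os kept outstanding, Little's law gives expected IOPS = queue depth / mean latency, and throughput follows from the 4 KiB I/O size:

    1024 / 0.09167005 s  =  ~11,171 IOPS    (measured: 11122.53)
    11122.53 * 4096 B    =  ~43.45 MiB/s    (measured: 43.45 MiB/s)

The second, all-zero latency table is just bdevperf's shutdown-time summary, printed after the timed run had already completed; it does not indicate a failed pass.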
12:25:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.568 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:58.568 ************************************ 00:10:58.568 END TEST nvmf_queue_depth 00:10:58.568 ************************************ 00:10:58.568 12:25:04 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:10:58.568 12:25:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:58.568 12:25:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.568 12:25:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:58.568 ************************************ 00:10:58.568 START TEST nvmf_target_multipath 00:10:58.568 ************************************ 00:10:58.568 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:10:58.568 * Looking for test storage... 00:10:58.568 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:58.568 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:58.568 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- 
# (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:58.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.569 --rc genhtml_branch_coverage=1 00:10:58.569 --rc genhtml_function_coverage=1 00:10:58.569 --rc genhtml_legend=1 00:10:58.569 --rc geninfo_all_blocks=1 00:10:58.569 --rc geninfo_unexecuted_blocks=1 00:10:58.569 00:10:58.569 ' 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:58.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.569 --rc genhtml_branch_coverage=1 00:10:58.569 --rc genhtml_function_coverage=1 00:10:58.569 --rc genhtml_legend=1 00:10:58.569 --rc geninfo_all_blocks=1 00:10:58.569 --rc geninfo_unexecuted_blocks=1 00:10:58.569 00:10:58.569 ' 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:58.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.569 --rc genhtml_branch_coverage=1 00:10:58.569 --rc genhtml_function_coverage=1 00:10:58.569 --rc genhtml_legend=1 00:10:58.569 --rc geninfo_all_blocks=1 00:10:58.569 --rc geninfo_unexecuted_blocks=1 00:10:58.569 00:10:58.569 ' 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:58.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.569 --rc genhtml_branch_coverage=1 00:10:58.569 --rc genhtml_function_coverage=1 00:10:58.569 --rc genhtml_legend=1 00:10:58.569 --rc geninfo_all_blocks=1 00:10:58.569 --rc geninfo_unexecuted_blocks=1 00:10:58.569 00:10:58.569 ' 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:58.569 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:58.569 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:58.570 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:58.570 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:58.570 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:58.570 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:58.570 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:58.570 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:58.570 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:58.570 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.570 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:58.570 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:58.570 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:58.570 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.570 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.570 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.570 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:58.570 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:58.570 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:58.570 12:25:04 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:11:01.115 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:11:01.115 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:01.115 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:11:01.116 Found net devices under 0000:83:00.0: mlx_0_0 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:01.116 
12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:11:01.116 Found net devices under 0000:83:00.1: mlx_0_1 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
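Note: with the ib_* and rdma_* modules loaded, the allocate_nic_ips loop below resolves each mlx interface to its IPv4 address with a plain ip/awk/cut pipeline. A minimal sketch of that helper (interface and address taken from this host):

    get_ip_address() {
        local ifc=$1
        # "8: mlx_0_0    inet 192.168.100.8/24 ..." -> 192.168.100.8
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this host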
00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:01.116 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:01.116 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:11:01.116 altname enp131s0f0np0 00:11:01.116 inet 192.168.100.8/24 scope global mlx_0_0 00:11:01.116 valid_lft forever preferred_lft forever 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # 
ip=192.168.100.9 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:01.116 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:01.116 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:11:01.116 altname enp131s0f1np1 00:11:01.116 inet 192.168.100.9/24 scope global mlx_0_1 00:11:01.116 valid_lft forever preferred_lft forever 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:01.116 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:01.117 
12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:01.117 192.168.100.9' 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:01.117 192.168.100.9' 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:01.117 192.168.100.9' 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:11:01.117 run this test only with TCP transport for now 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:11:01.117 
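Note: multipath.sh bails out on purpose here: its test body is TCP-only for now, so on an rdma run it prints the notice, tears down with nvmftestfini, and exits 0, which is why the suite is still counted as passed below. The guard amounts to a transport string comparison; a sketch (the trace only shows the expanded test, so the variable name is illustrative):

    if [ "$TEST_TRANSPORT" != tcp ]; then
        echo 'run this test only with TCP transport for now'
        nvmftestfini
        exit 0
    fi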
12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:01.117 rmmod nvme_rdma 00:11:01.117 rmmod nvme_fabrics 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:01.117 00:11:01.117 real 0m2.707s 00:11:01.117 user 0m0.867s 00:11:01.117 sys 0m1.910s 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # 
set +x 00:11:01.117 ************************************ 00:11:01.117 END TEST nvmf_target_multipath 00:11:01.117 ************************************ 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:01.117 ************************************ 00:11:01.117 START TEST nvmf_zcopy 00:11:01.117 ************************************ 00:11:01.117 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:11:01.378 * Looking for test storage... 00:11:01.378 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:01.378 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:01.378 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:11:01.378 12:25:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.378 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:01.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.379 --rc genhtml_branch_coverage=1 00:11:01.379 --rc genhtml_function_coverage=1 00:11:01.379 --rc genhtml_legend=1 00:11:01.379 --rc geninfo_all_blocks=1 00:11:01.379 --rc geninfo_unexecuted_blocks=1 00:11:01.379 00:11:01.379 ' 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:01.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.379 --rc genhtml_branch_coverage=1 00:11:01.379 --rc genhtml_function_coverage=1 00:11:01.379 --rc genhtml_legend=1 00:11:01.379 --rc geninfo_all_blocks=1 00:11:01.379 --rc geninfo_unexecuted_blocks=1 00:11:01.379 00:11:01.379 ' 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:01.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.379 --rc genhtml_branch_coverage=1 00:11:01.379 --rc genhtml_function_coverage=1 00:11:01.379 --rc genhtml_legend=1 00:11:01.379 --rc geninfo_all_blocks=1 00:11:01.379 --rc geninfo_unexecuted_blocks=1 00:11:01.379 00:11:01.379 ' 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:01.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.379 --rc genhtml_branch_coverage=1 00:11:01.379 --rc genhtml_function_coverage=1 00:11:01.379 --rc genhtml_legend=1 00:11:01.379 --rc geninfo_all_blocks=1 00:11:01.379 --rc geninfo_unexecuted_blocks=1 00:11:01.379 00:11:01.379 ' 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.379 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:11:01.379 12:25:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:11:03.919 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:11:03.919 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
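The device-discovery pass traced above (gather_supported_nvmf_pci_devs) condenses to the sketch below. The variable names are the ones nvmf/common.sh itself prints in the trace, and the sysfs glob is how each matched PCI function is mapped to its kernel net device; this is a condensed reading of the trace, not the verbatim script:

    # walk each matched mlx5 PCI function and resolve its net device via sysfs
    for pci in "${pci_devs[@]}"; do                       # e.g. 0000:83:00.0
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # one entry per netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")           # strip path -> mlx_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done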
00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:11:03.919 Found net devices under 0000:83:00.0: mlx_0_0 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:11:03.919 Found net devices under 0000:83:00.1: mlx_0_1 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:03.919 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:03.920 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:03.920 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:11:03.920 altname enp131s0f0np0 00:11:03.920 inet 192.168.100.8/24 scope global mlx_0_0 00:11:03.920 valid_lft forever 
preferred_lft forever 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:03.920 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:03.920 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:11:03.920 altname enp131s0f1np1 00:11:03.920 inet 192.168.100.9/24 scope global mlx_0_1 00:11:03.920 valid_lft forever preferred_lft forever 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:03.920 192.168.100.9' 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:03.920 192.168.100.9' 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:03.920 192.168.100.9' 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:03.920 12:25:09 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2726599 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2726599 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2726599 ']' 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.920 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.921 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.921 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.921 12:25:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:03.921 [2024-11-20 12:25:09.547774] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:11:03.921 [2024-11-20 12:25:09.547946] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.921 [2024-11-20 12:25:09.681645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.181 [2024-11-20 12:25:09.785251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.181 [2024-11-20 12:25:09.785370] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.181 [2024-11-20 12:25:09.785406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.181 [2024-11-20 12:25:09.785435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.181 [2024-11-20 12:25:09.785468] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
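The waitforlisten step above blocks until nvmf_tgt (pid 2726599) is alive and its JSON-RPC socket answers. The loop below is a rough illustrative reconstruction, not the exact autotest_common.sh code; it assumes the /var/tmp/spdk.sock path printed in the log and SPDK's stock scripts/rpc.py client:

    # poll until the target process exists and its RPC socket accepts commands
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1           # process died early
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                                     # socket is up
            fi
            sleep 0.5
        done
        return 1
    }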
00:11:04.181 [2024-11-20 12:25:09.786464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:11:05.122 Unsupported transport: rdma 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # type=--id 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@813 -- # id=0 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@824 -- # for n in $shm_files 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:05.122 nvmf_trace.0 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@827 -- # return 0 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:05.122 rmmod nvme_rdma 00:11:05.122 rmmod nvme_fabrics 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 
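The clean early exit just above comes from a three-line guard at the top of target/zcopy.sh, reconstructed here from trace lines @15-@17. The literals are exactly what the log shows; the $TEST_TRANSPORT variable name is an assumption:

    # zcopy is exercised only over TCP; skip cleanly on other transports
    if [ "$TEST_TRANSPORT" != tcp ]; then
        echo "Unsupported transport: $TEST_TRANSPORT"
        exit 0    # exit 0, so run_test still records the suite as passed
    fi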
00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2726599 ']' 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2726599 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2726599 ']' 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2726599 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2726599 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2726599' 00:11:05.122 killing process with pid 2726599 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2726599 00:11:05.122 12:25:10 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2726599 00:11:05.383 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:05.383 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:05.383 00:11:05.383 real 0m4.236s 00:11:05.383 user 0m2.898s 00:11:05.383 sys 0m2.199s 00:11:05.383 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.383 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.383 ************************************ 00:11:05.383 END TEST nvmf_zcopy 00:11:05.383 ************************************ 00:11:05.383 12:25:11 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:11:05.383 12:25:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:05.383 12:25:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.383 12:25:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:05.383 ************************************ 00:11:05.383 START TEST nvmf_nmic 00:11:05.383 ************************************ 00:11:05.383 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:11:05.643 * Looking for test storage... 
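The killprocess sequence for pid 2726599, visible just above, follows a fixed pattern: a kill -0 liveness probe, a comm-name check so the helper never signals sudo itself, then kill and wait. Condensed (not verbatim) from the autotest_common.sh steps in the trace:

    killprocess() {
        local pid=$1 process_name=""
        kill -0 "$pid" 2>/dev/null || return 0          # already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_1
        fi
        [ "$process_name" = sudo ] && return 1          # refuse to kill sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }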
00:11:05.643 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.643 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:05.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.644 --rc genhtml_branch_coverage=1 00:11:05.644 --rc genhtml_function_coverage=1 00:11:05.644 --rc genhtml_legend=1 00:11:05.644 --rc geninfo_all_blocks=1 00:11:05.644 --rc geninfo_unexecuted_blocks=1 00:11:05.644 00:11:05.644 ' 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:05.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.644 --rc genhtml_branch_coverage=1 00:11:05.644 --rc genhtml_function_coverage=1 00:11:05.644 --rc genhtml_legend=1 00:11:05.644 --rc geninfo_all_blocks=1 00:11:05.644 --rc geninfo_unexecuted_blocks=1 00:11:05.644 00:11:05.644 ' 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:05.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.644 --rc genhtml_branch_coverage=1 00:11:05.644 --rc genhtml_function_coverage=1 00:11:05.644 --rc genhtml_legend=1 00:11:05.644 --rc geninfo_all_blocks=1 00:11:05.644 --rc geninfo_unexecuted_blocks=1 00:11:05.644 00:11:05.644 ' 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:05.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.644 --rc genhtml_branch_coverage=1 00:11:05.644 --rc genhtml_function_coverage=1 00:11:05.644 --rc genhtml_legend=1 00:11:05.644 --rc geninfo_all_blocks=1 00:11:05.644 --rc geninfo_unexecuted_blocks=1 00:11:05.644 00:11:05.644 ' 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:05.644 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
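The "[: : integer expression expected" message repeated above is a real but harmless artifact of sourcing nvmf/common.sh: an unset flag expands to the empty string, so '[' '' -eq 1 ']' is not a valid arithmetic comparison and the branch simply falls through. A defensive form that would silence the noise looks like the sketch below; SOME_FLAG and --some-option are placeholders, since the trace does not show which flag is empty:

    # SOME_FLAG stands in for whichever test flag is unset at common.sh line 33
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-option)    # illustrative only
    fi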
00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:05.644 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:05.645 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.645 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.645 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.645 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:05.645 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:05.645 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:05.645 12:25:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.185 12:25:13 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:08.185 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:11:08.186 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:11:08.186 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:11:08.186 Found net devices under 0000:83:00.0: mlx_0_0 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:11:08.186 Found net devices under 0000:83:00.1: mlx_0_1 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 
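The device-discovery pass traced above (nvmf/common.sh @315 through @429) does three things: it builds per-vendor lists of PCI device IDs (e810/x722 for Intel, mlx for Mellanox), narrows to the mlx list because SPDK_TEST_NVMF_NICS=mlx5, and then resolves each surviving PCI address to its kernel interface through sysfs. A minimal standalone sketch of the sysfs step, assuming the same /sys layout as these hosts:

    # Map each PCI address to its net device, as the pci_net_devs lines above do.
    for pci in 0000:83:00.0 0000:83:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
        (( ${#pci_net_devs[@]} == 0 )) && continue         # the "(( 1 == 0 ))" guard in the trace
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep only the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done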
00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:08.186 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:08.186 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:11:08.186 altname enp131s0f0np0 00:11:08.186 inet 
192.168.100.8/24 scope global mlx_0_0 00:11:08.186 valid_lft forever preferred_lft forever 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:08.186 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:08.186 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:11:08.186 altname enp131s0f1np1 00:11:08.186 inet 192.168.100.9/24 scope global mlx_0_1 00:11:08.186 valid_lft forever preferred_lft forever 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:08.186 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:08.187 192.168.100.9' 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:08.187 192.168.100.9' 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:08.187 192.168.100.9' 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:08.187 12:25:13 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2728099 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2728099 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2728099 ']' 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.187 12:25:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.187 [2024-11-20 12:25:13.734271] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:11:08.187 [2024-11-20 12:25:13.734377] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.187 [2024-11-20 12:25:13.806520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.187 [2024-11-20 12:25:13.872965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.187 [2024-11-20 12:25:13.873029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.187 [2024-11-20 12:25:13.873045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.187 [2024-11-20 12:25:13.873058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.187 [2024-11-20 12:25:13.873069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
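Between module load and target start, allocate_nic_ips derived the two target addresses (192.168.100.8 and 192.168.100.9) from the RDMA interfaces. get_ip_address, as traced at @116/@117 above, is a three-stage pipeline over ip(8)'s one-line output format; an equivalent standalone sketch:

    get_ip_address() {
        local interface=$1
        # "ip -o" prints one record per address; field 4 is "ADDR/PREFIX"
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this host
    get_ip_address mlx_0_1   # -> 192.168.100.9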
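nvmfappstart (@507 through @510) backgrounds build/bin/nvmf_tgt with core mask 0xF and blocks in waitforlisten until the target's JSON-RPC socket answers; the DPDK EAL and app_setup_trace notices above are the target coming up in between. A minimal sketch of that start-and-wait, assuming nvmf_tgt and rpc.py are on PATH (the real waitforlisten in autotest_common.sh adds argument handling and richer diagnostics):

    nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # bail out early if the target died during startup
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited during startup' >&2; exit 1; }
        # rpc_get_methods succeeds once the UNIX-domain RPC socket is up
        rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done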
00:11:08.187 [2024-11-20 12:25:13.874397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.187 [2024-11-20 12:25:13.874454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.187 [2024-11-20 12:25:13.874507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.187 [2024-11-20 12:25:13.874511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.446 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.446 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:08.446 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:08.446 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:08.446 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.446 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.446 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:08.446 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.446 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.446 [2024-11-20 12:25:14.080690] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x178adf0/0x178f2e0) succeed. 00:11:08.446 [2024-11-20 12:25:14.096551] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x178c480/0x17d0980) succeed. 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.705 Malloc0 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:08.705 12:25:14 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.705 [2024-11-20 12:25:14.314994] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:08.705 test case1: single bdev can't be used in multiple subsystems 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.705 [2024-11-20 12:25:14.338766] bdev.c:8203:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:08.705 [2024-11-20 12:25:14.338799] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:08.705 [2024-11-20 12:25:14.338815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.705 request: 00:11:08.705 { 00:11:08.705 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:08.705 "namespace": { 00:11:08.705 "bdev_name": "Malloc0", 00:11:08.705 "no_auto_visible": false 00:11:08.705 }, 00:11:08.705 "method": "nvmf_subsystem_add_ns", 00:11:08.705 "req_id": 1 00:11:08.705 } 00:11:08.705 Got JSON-RPC error response 00:11:08.705 response: 00:11:08.705 { 00:11:08.705 "code": -32602, 00:11:08.705 "message": "Invalid parameters" 00:11:08.705 } 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:08.705 Adding namespace failed - expected result. 
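Test case1 above is the heart of nmic: Malloc0 is already attached to cnode1, which holds an exclusive_write claim on the bdev, so bdev_open refuses the second attach, subsystem.c turns that into error=-1, and the RPC layer surfaces JSON-RPC -32602 "Invalid parameters". The script therefore treats a zero exit status from the RPC as the failure case. The same check as standalone calls, using only commands visible in the trace:

    rpc_py=scripts/rpc.py   # same rpc.py used elsewhere in this run
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420
    if $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo 'unexpected: Malloc0 claimed by two subsystems' >&2
        exit 1
    fi
    echo ' Adding namespace failed - expected result.'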
00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:08.705 test case2: host connect to nvmf target in multiple paths 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.705 [2024-11-20 12:25:14.346799] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.705 12:25:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:09.638 12:25:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:11:11.009 12:25:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:11.009 12:25:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:11.010 12:25:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:11.010 12:25:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:11.010 12:25:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:11:12.909 12:25:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:12.909 12:25:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:12.909 12:25:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:12.909 12:25:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:12.909 12:25:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:12.909 12:25:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:12.909 12:25:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:12.909 [global] 00:11:12.909 thread=1 00:11:12.909 invalidate=1 00:11:12.909 rw=write 00:11:12.909 time_based=1 00:11:12.909 runtime=1 00:11:12.909 ioengine=libaio 00:11:12.909 direct=1 00:11:12.909 bs=4096 00:11:12.909 iodepth=1 00:11:12.909 norandommap=0 00:11:12.909 numjobs=1 00:11:12.909 00:11:12.909 verify_dump=1 00:11:12.909 verify_backlog=512 00:11:12.909 verify_state_save=0 00:11:12.909 do_verify=1 00:11:12.909 verify=crc32c-intel 00:11:12.909 [job0] 00:11:12.909 filename=/dev/nvme0n1 00:11:12.909 Could not set queue depth (nvme0n1) 00:11:12.909 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.909 fio-3.35 00:11:12.909 Starting 1 thread 00:11:14.279 00:11:14.279 job0: (groupid=0, jobs=1): err= 0: pid=2728569: Wed Nov 20 12:25:19 2024 00:11:14.279 read: IOPS=5172, BW=20.2MiB/s (21.2MB/s)(20.2MiB/1001msec) 00:11:14.279 slat (nsec): min=4531, max=44341, avg=10255.19, stdev=3530.09 00:11:14.279 clat (usec): min=67, max=178, avg=81.59, stdev= 7.52 00:11:14.279 lat (usec): min=72, max=192, avg=91.84, stdev= 9.20 00:11:14.279 clat percentiles (usec): 00:11:14.279 | 1.00th=[ 71], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 77], 00:11:14.279 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 83], 00:11:14.279 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 88], 95.00th=[ 92], 00:11:14.279 | 99.00th=[ 111], 99.50th=[ 127], 99.90th=[ 165], 99.95th=[ 176], 00:11:14.279 | 99.99th=[ 180] 00:11:14.279 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:11:14.279 slat (nsec): min=5318, max=39432, avg=11663.10, stdev=3335.90 00:11:14.279 clat (usec): min=62, max=184, avg=76.08, stdev= 6.07 00:11:14.279 lat (usec): min=68, max=205, avg=87.75, stdev= 7.80 00:11:14.279 clat percentiles (usec): 00:11:14.279 | 1.00th=[ 67], 5.00th=[ 69], 10.00th=[ 71], 20.00th=[ 73], 00:11:14.279 | 30.00th=[ 74], 40.00th=[ 75], 50.00th=[ 76], 60.00th=[ 78], 00:11:14.279 | 70.00th=[ 79], 80.00th=[ 80], 90.00th=[ 83], 95.00th=[ 86], 00:11:14.279 | 99.00th=[ 94], 99.50th=[ 100], 99.90th=[ 125], 99.95th=[ 155], 00:11:14.279 | 99.99th=[ 186] 00:11:14.279 bw ( KiB/s): min=23680, max=23680, per=100.00%, avg=23680.00, stdev= 0.00, samples=1 00:11:14.279 iops : min= 5922, max= 5922, avg=5922.00, stdev= 0.00, samples=1 00:11:14.279 lat (usec) : 100=98.99%, 250=1.01% 00:11:14.279 cpu : usr=7.40%, sys=15.50%, ctx=10810, majf=0, minf=1 00:11:14.279 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.279 issued rwts: total=5178,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.279 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.279 00:11:14.279 Run status group 0 (all jobs): 00:11:14.279 READ: bw=20.2MiB/s (21.2MB/s), 20.2MiB/s-20.2MiB/s (21.2MB/s-21.2MB/s), io=20.2MiB (21.2MB), run=1001-1001msec 00:11:14.279 WRITE: bw=22.0MiB/s (23.0MB/s), 22.0MiB/s-22.0MiB/s (23.0MB/s-23.0MB/s), io=22.0MiB (23.1MB), run=1001-1001msec 00:11:14.279 00:11:14.279 Disk stats (read/write): 00:11:14.279 nvme0n1: ios=4675/5120, merge=0/0, ticks=361/341, in_queue=702, util=90.88% 00:11:14.279 12:25:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:16.176 rmmod nvme_rdma 00:11:16.176 rmmod nvme_fabrics 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2728099 ']' 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2728099 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2728099 ']' 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2728099 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2728099 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2728099' 00:11:16.176 killing process with pid 2728099 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2728099 00:11:16.176 12:25:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2728099 00:11:16.436 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.436 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:16.436 00:11:16.436 real 0m11.041s 00:11:16.436 user 0m33.617s 00:11:16.436 sys 0m2.683s 00:11:16.436 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.436 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:16.436 ************************************ 00:11:16.436 END TEST nvmf_nmic 00:11:16.436 ************************************ 
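The fio phase above is the test's actual I/O: fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v expands to a single 4 KiB, queue-depth-1 libaio write job against /dev/nvme0n1 with crc32c-intel read-back verification, which is why a write test reports a READ bandwidth line at all. An equivalent direct invocation, assuming stock fio command-line spellings of the job-file options shown:

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --bs=4096 --iodepth=1 --rw=write --time_based=1 --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1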
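Teardown (nvmftestfini) then mirrors setup: disconnect the host side, unload nvme-rdma/nvme-fabrics (the bare "rmmod nvme_rdma" lines appear because the modprobe -r runs inside a set +e retry window), and kill the nvmf_tgt pid saved at startup. Roughly, under the same variable names as the trace:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-rdma    || true   # the real helper retries this up to 20 times
    modprobe -v -r nvme-fabrics || true
    [ -n "$nvmfpid" ] && kill "$nvmfpid"  # killprocess also verifies the pid's comm first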
00:11:16.436 12:25:22 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:11:16.436 12:25:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:16.436 12:25:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.436 12:25:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:16.436 ************************************ 00:11:16.436 START TEST nvmf_fio_target 00:11:16.436 ************************************ 00:11:16.436 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:11:16.696 * Looking for test storage... 00:11:16.696 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:16.696 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:16.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.697 --rc genhtml_branch_coverage=1 00:11:16.697 --rc genhtml_function_coverage=1 00:11:16.697 --rc genhtml_legend=1 00:11:16.697 --rc geninfo_all_blocks=1 00:11:16.697 --rc geninfo_unexecuted_blocks=1 00:11:16.697 00:11:16.697 ' 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:16.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.697 --rc genhtml_branch_coverage=1 00:11:16.697 --rc genhtml_function_coverage=1 00:11:16.697 --rc genhtml_legend=1 00:11:16.697 --rc geninfo_all_blocks=1 00:11:16.697 --rc geninfo_unexecuted_blocks=1 00:11:16.697 00:11:16.697 ' 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:16.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.697 --rc genhtml_branch_coverage=1 00:11:16.697 --rc genhtml_function_coverage=1 00:11:16.697 --rc genhtml_legend=1 00:11:16.697 --rc geninfo_all_blocks=1 00:11:16.697 --rc geninfo_unexecuted_blocks=1 00:11:16.697 00:11:16.697 ' 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:16.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.697 --rc genhtml_branch_coverage=1 00:11:16.697 --rc genhtml_function_coverage=1 00:11:16.697 --rc genhtml_legend=1 00:11:16.697 --rc geninfo_all_blocks=1 00:11:16.697 --rc geninfo_unexecuted_blocks=1 00:11:16.697 00:11:16.697 ' 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:16.697 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:16.697 
12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:16.697 12:25:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.240 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:19.240 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:19.240 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:19.240 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:19.240 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:19.240 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:19.240 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:19.240 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:19.240 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:19.240 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:19.240 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:19.240 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:19.240 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:19.240 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:19.240 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:19.240 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.240 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
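One genuine defect is visible a few lines above: "/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" comes from build_nvmf_app_args, where the trace shows '[' '' -eq 1 ']', i.e. an unset configuration flag reaching test(1) as an empty string. The broken comparison returns non-zero, so the guard evaluates false and the run proceeds correctly, but only by accident. A defensive form; the flag's name and the consequent are assumptions here, since the trace shows only the empty value:

    # Default the flag so test(1) always sees an integer operand.
    if [ "${SPDK_RUN_NON_ROOT:-0}" -eq 1 ]; then
        NVMF_APP=(sudo -E -u "$SUDO_USER" "${NVMF_APP[@]}")   # hypothetical consequent
    fi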
00:11:19.240 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.240 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.240 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:11:19.241 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:11:19.241 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:11:19.241 Found net devices under 0000:83:00.0: mlx_0_0 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:11:19.241 Found net devices under 0000:83:00.1: mlx_0_1 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:19.241 12:25:24 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:19.241 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:19.242 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:19.242 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:11:19.242 altname enp131s0f0np0 00:11:19.242 inet 192.168.100.8/24 scope global mlx_0_0 00:11:19.242 valid_lft forever preferred_lft forever 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:19.242 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:19.242 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:11:19.242 altname enp131s0f1np1 00:11:19.242 inet 192.168.100.9/24 scope global mlx_0_1 00:11:19.242 valid_lft forever preferred_lft forever 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:19.242 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:19.243 192.168.100.9' 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:19.243 192.168.100.9' 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:19.243 12:25:24 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:19.243 192.168.100.9' 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2730256 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2730256 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2730256 ']' 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.243 12:25:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.243 [2024-11-20 12:25:24.795894] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:11:19.243 [2024-11-20 12:25:24.796001] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.243 [2024-11-20 12:25:24.871009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:19.243 [2024-11-20 12:25:24.935224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:19.243 [2024-11-20 12:25:24.935285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:19.243 [2024-11-20 12:25:24.935300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.243 [2024-11-20 12:25:24.935313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.243 [2024-11-20 12:25:24.935325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:19.243 [2024-11-20 12:25:24.936661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.243 [2024-11-20 12:25:24.936718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.243 [2024-11-20 12:25:24.936777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.243 [2024-11-20 12:25:24.936818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.502 12:25:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.502 12:25:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:19.502 12:25:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:19.502 12:25:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:19.502 12:25:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.502 12:25:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.502 12:25:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:19.761 [2024-11-20 12:25:25.445464] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd73df0/0xd782e0) succeed. 00:11:19.761 [2024-11-20 12:25:25.461072] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd75480/0xdb9980) succeed. 
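At this point the harness has resolved the two RDMA ports to 192.168.100.8 and 192.168.100.9 (via ip -o -4 addr show <if> piped through awk '{print $4}' and cut -d/ -f1), started the target, and created the RDMA transport. A minimal bash sketch of that bring-up follows; every path, flag, and RPC argument is taken from the trace above, while SPDK_ROOT and the polling loop (a simplified stand-in for waitforlisten) are assumptions:

#!/usr/bin/env bash
# Bring-up as traced above. SPDK_ROOT is an assumption; all flags come from the log.
SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}
rpc="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/spdk.sock"

# Start the NVMe-oF target: instance 0, all tracepoint groups (0xFFFF), cores 0-3 (0xF).
"$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &

# Simplified stand-in for waitforlisten: poll until the RPC socket answers.
until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

# Creating the RDMA transport claims the mlx5 ports; this step produces the
# "Create IB device mlx5_0/mlx5_1 ... succeed" notices logged above.
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192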
00:11:20.019 12:25:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:20.304 12:25:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:20.305 12:25:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:20.905 12:25:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:20.905 12:25:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:21.164 12:25:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:21.164 12:25:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:21.422 12:25:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:21.422 12:25:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:21.680 12:25:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:22.245 12:25:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:22.245 12:25:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:22.503 12:25:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:22.503 12:25:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:22.761 12:25:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:22.761 12:25:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:23.019 12:25:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:23.585 12:25:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:23.585 12:25:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:23.843 12:25:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:23.843 12:25:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:24.101 12:25:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:24.359 [2024-11-20 12:25:30.085810] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:24.359 12:25:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:24.925 12:25:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:25.184 12:25:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:26.119 12:25:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:26.119 12:25:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:26.119 12:25:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:26.119 12:25:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:26.119 12:25:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:26.119 12:25:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:28.651 12:25:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:28.651 12:25:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:28.651 12:25:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:28.651 12:25:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:28.651 12:25:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:28.651 12:25:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:28.651 12:25:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:28.651 [global] 00:11:28.651 thread=1 00:11:28.651 invalidate=1 00:11:28.651 rw=write 00:11:28.651 time_based=1 00:11:28.651 runtime=1 00:11:28.651 ioengine=libaio 00:11:28.651 direct=1 00:11:28.651 bs=4096 00:11:28.651 iodepth=1 00:11:28.651 norandommap=0 00:11:28.651 numjobs=1 00:11:28.651 00:11:28.651 verify_dump=1 00:11:28.651 verify_backlog=512 00:11:28.651 verify_state_save=0 00:11:28.651 do_verify=1 00:11:28.651 verify=crc32c-intel 00:11:28.651 [job0] 00:11:28.651 filename=/dev/nvme0n1 00:11:28.651 [job1] 00:11:28.651 filename=/dev/nvme0n2 00:11:28.651 [job2] 00:11:28.651 filename=/dev/nvme0n3 00:11:28.651 [job3] 00:11:28.651 filename=/dev/nvme0n4 00:11:28.651 Could not set queue depth (nvme0n1) 00:11:28.651 Could not set queue depth (nvme0n2) 00:11:28.651 Could not set queue depth (nvme0n3) 00:11:28.651 Could not set queue depth (nvme0n4) 00:11:28.651 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:28.651 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:28.651 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:28.651 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:28.651 fio-3.35 00:11:28.651 Starting 4 threads 00:11:29.588 00:11:29.588 job0: (groupid=0, jobs=1): err= 0: pid=2731153: Wed Nov 20 12:25:35 2024 00:11:29.588 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:11:29.588 slat (nsec): min=5011, max=46296, avg=11984.86, stdev=4919.32 00:11:29.588 clat (usec): min=77, max=308, avg=126.01, stdev=41.25 00:11:29.588 lat (usec): min=90, max=320, avg=137.99, stdev=41.45 00:11:29.588 clat percentiles (usec): 00:11:29.588 | 1.00th=[ 85], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 95], 00:11:29.588 | 30.00th=[ 98], 40.00th=[ 102], 50.00th=[ 106], 60.00th=[ 114], 00:11:29.588 | 70.00th=[ 145], 80.00th=[ 155], 90.00th=[ 204], 95.00th=[ 217], 00:11:29.588 | 99.00th=[ 233], 99.50th=[ 253], 99.90th=[ 285], 99.95th=[ 293], 00:11:29.588 | 99.99th=[ 310] 00:11:29.588 write: IOPS=3670, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1001msec); 0 zone resets 00:11:29.588 slat (nsec): min=5617, max=59501, avg=13533.41, stdev=5948.84 00:11:29.588 clat (usec): min=71, max=280, avg=117.37, stdev=32.34 00:11:29.588 lat (usec): min=84, max=294, avg=130.91, stdev=32.70 00:11:29.588 clat percentiles (usec): 00:11:29.588 | 1.00th=[ 80], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 89], 00:11:29.588 | 30.00th=[ 93], 40.00th=[ 97], 50.00th=[ 103], 60.00th=[ 125], 00:11:29.588 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 157], 95.00th=[ 169], 00:11:29.588 | 99.00th=[ 215], 99.50th=[ 223], 99.90th=[ 251], 99.95th=[ 265], 00:11:29.588 | 99.99th=[ 281] 00:11:29.588 bw ( KiB/s): min=16384, max=16384, per=29.08%, avg=16384.00, stdev= 0.00, samples=1 00:11:29.588 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:11:29.588 lat (usec) : 100=40.67%, 250=59.02%, 500=0.30% 00:11:29.588 cpu : usr=4.20%, sys=13.00%, ctx=7260, majf=0, minf=7 00:11:29.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.588 issued rwts: total=3584,3674,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.588 job1: (groupid=0, jobs=1): err= 0: pid=2731154: Wed Nov 20 12:25:35 2024 00:11:29.588 read: IOPS=3524, BW=13.8MiB/s (14.4MB/s)(13.8MiB/1001msec) 00:11:29.588 slat (nsec): min=4621, max=47263, avg=11267.67, stdev=4199.84 00:11:29.588 clat (usec): min=77, max=353, avg=136.36, stdev=48.10 00:11:29.588 lat (usec): min=91, max=378, avg=147.62, stdev=49.01 00:11:29.588 clat percentiles (usec): 00:11:29.588 | 1.00th=[ 86], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 94], 00:11:29.588 | 30.00th=[ 98], 40.00th=[ 103], 50.00th=[ 118], 60.00th=[ 149], 00:11:29.588 | 70.00th=[ 157], 80.00th=[ 167], 90.00th=[ 212], 95.00th=[ 225], 00:11:29.588 | 99.00th=[ 285], 99.50th=[ 314], 99.90th=[ 347], 99.95th=[ 351], 00:11:29.588 | 99.99th=[ 355] 00:11:29.588 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:29.588 slat (nsec): min=5551, max=63366, avg=11774.49, stdev=4940.74 00:11:29.588 clat (usec): min=75, 
max=260, avg=115.81, stdev=30.44 00:11:29.588 lat (usec): min=84, max=289, avg=127.58, stdev=30.04 00:11:29.588 clat percentiles (usec): 00:11:29.588 | 1.00th=[ 80], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 89], 00:11:29.588 | 30.00th=[ 92], 40.00th=[ 97], 50.00th=[ 103], 60.00th=[ 116], 00:11:29.588 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 165], 00:11:29.588 | 99.00th=[ 204], 99.50th=[ 215], 99.90th=[ 245], 99.95th=[ 258], 00:11:29.588 | 99.99th=[ 262] 00:11:29.588 bw ( KiB/s): min=16336, max=16336, per=29.00%, avg=16336.00, stdev= 0.00, samples=1 00:11:29.588 iops : min= 4084, max= 4084, avg=4084.00, stdev= 0.00, samples=1 00:11:29.588 lat (usec) : 100=40.37%, 250=58.72%, 500=0.91% 00:11:29.588 cpu : usr=6.10%, sys=9.80%, ctx=7112, majf=0, minf=2 00:11:29.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.588 issued rwts: total=3528,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.588 job2: (groupid=0, jobs=1): err= 0: pid=2731155: Wed Nov 20 12:25:35 2024 00:11:29.588 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:29.588 slat (nsec): min=5505, max=89827, avg=11264.07, stdev=4382.35 00:11:29.588 clat (usec): min=87, max=331, avg=147.89, stdev=45.97 00:11:29.588 lat (usec): min=93, max=340, avg=159.16, stdev=47.82 00:11:29.588 clat percentiles (usec): 00:11:29.588 | 1.00th=[ 94], 5.00th=[ 98], 10.00th=[ 100], 20.00th=[ 105], 00:11:29.588 | 30.00th=[ 112], 40.00th=[ 120], 50.00th=[ 145], 60.00th=[ 153], 00:11:29.588 | 70.00th=[ 163], 80.00th=[ 188], 90.00th=[ 221], 95.00th=[ 235], 00:11:29.588 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 302], 99.95th=[ 310], 00:11:29.588 | 99.99th=[ 330] 00:11:29.588 write: IOPS=3253, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec); 0 zone resets 00:11:29.588 slat (nsec): min=5816, max=47930, avg=13423.28, stdev=5415.30 00:11:29.588 clat (usec): min=83, max=297, avg=137.16, stdev=39.24 00:11:29.588 lat (usec): min=90, max=318, avg=150.59, stdev=41.08 00:11:29.588 clat percentiles (usec): 00:11:29.588 | 1.00th=[ 88], 5.00th=[ 91], 10.00th=[ 94], 20.00th=[ 100], 00:11:29.588 | 30.00th=[ 106], 40.00th=[ 117], 50.00th=[ 139], 60.00th=[ 147], 00:11:29.588 | 70.00th=[ 151], 80.00th=[ 161], 90.00th=[ 200], 95.00th=[ 212], 00:11:29.588 | 99.00th=[ 253], 99.50th=[ 269], 99.90th=[ 285], 99.95th=[ 289], 00:11:29.588 | 99.99th=[ 297] 00:11:29.588 bw ( KiB/s): min=16384, max=16384, per=29.08%, avg=16384.00, stdev= 0.00, samples=1 00:11:29.588 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:11:29.588 lat (usec) : 100=15.47%, 250=82.86%, 500=1.67% 00:11:29.588 cpu : usr=4.00%, sys=10.00%, ctx=6330, majf=0, minf=6 00:11:29.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.588 issued rwts: total=3072,3257,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.588 job3: (groupid=0, jobs=1): err= 0: pid=2731156: Wed Nov 20 12:25:35 2024 00:11:29.588 read: IOPS=3144, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec) 00:11:29.588 slat (nsec): min=4717, max=63980, avg=11599.61, stdev=4998.21 
00:11:29.588 clat (usec): min=89, max=335, avg=137.40, stdev=43.24 00:11:29.588 lat (usec): min=94, max=340, avg=149.00, stdev=44.14 00:11:29.588 clat percentiles (usec): 00:11:29.588 | 1.00th=[ 95], 5.00th=[ 98], 10.00th=[ 101], 20.00th=[ 104], 00:11:29.588 | 30.00th=[ 108], 40.00th=[ 112], 50.00th=[ 117], 60.00th=[ 129], 00:11:29.588 | 70.00th=[ 153], 80.00th=[ 165], 90.00th=[ 215], 95.00th=[ 227], 00:11:29.588 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 302], 99.95th=[ 314], 00:11:29.588 | 99.99th=[ 334] 00:11:29.588 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:29.588 slat (nsec): min=5550, max=58954, avg=14050.02, stdev=6017.19 00:11:29.588 clat (usec): min=83, max=553, avg=127.86, stdev=42.42 00:11:29.588 lat (usec): min=89, max=574, avg=141.91, stdev=43.64 00:11:29.588 clat percentiles (usec): 00:11:29.588 | 1.00th=[ 89], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 98], 00:11:29.588 | 30.00th=[ 101], 40.00th=[ 104], 50.00th=[ 109], 60.00th=[ 115], 00:11:29.588 | 70.00th=[ 137], 80.00th=[ 155], 90.00th=[ 206], 95.00th=[ 219], 00:11:29.588 | 99.00th=[ 255], 99.50th=[ 273], 99.90th=[ 293], 99.95th=[ 302], 00:11:29.588 | 99.99th=[ 553] 00:11:29.588 bw ( KiB/s): min=16384, max=16384, per=29.08%, avg=16384.00, stdev= 0.00, samples=1 00:11:29.588 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:11:29.588 lat (usec) : 100=18.21%, 250=80.38%, 500=1.40%, 750=0.01% 00:11:29.588 cpu : usr=5.10%, sys=10.60%, ctx=6736, majf=0, minf=3 00:11:29.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.588 issued rwts: total=3148,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.588 00:11:29.588 Run status group 0 (all jobs): 00:11:29.588 READ: bw=52.0MiB/s (54.6MB/s), 12.0MiB/s-14.0MiB/s (12.6MB/s-14.7MB/s), io=52.1MiB (54.6MB), run=1001-1001msec 00:11:29.588 WRITE: bw=55.0MiB/s (57.7MB/s), 12.7MiB/s-14.3MiB/s (13.3MB/s-15.0MB/s), io=55.1MiB (57.7MB), run=1001-1001msec 00:11:29.588 00:11:29.588 Disk stats (read/write): 00:11:29.588 nvme0n1: ios=3122/3534, merge=0/0, ticks=313/371, in_queue=684, util=87.37% 00:11:29.589 nvme0n2: ios=2778/3072, merge=0/0, ticks=412/345, in_queue=757, util=87.85% 00:11:29.589 nvme0n3: ios=2632/3072, merge=0/0, ticks=354/397, in_queue=751, util=89.35% 00:11:29.589 nvme0n4: ios=3015/3072, merge=0/0, ticks=400/347, in_queue=747, util=89.81% 00:11:29.589 12:25:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:29.589 [global] 00:11:29.589 thread=1 00:11:29.589 invalidate=1 00:11:29.589 rw=randwrite 00:11:29.589 time_based=1 00:11:29.589 runtime=1 00:11:29.589 ioengine=libaio 00:11:29.589 direct=1 00:11:29.589 bs=4096 00:11:29.589 iodepth=1 00:11:29.589 norandommap=0 00:11:29.589 numjobs=1 00:11:29.589 00:11:29.589 verify_dump=1 00:11:29.589 verify_backlog=512 00:11:29.589 verify_state_save=0 00:11:29.589 do_verify=1 00:11:29.589 verify=crc32c-intel 00:11:29.589 [job0] 00:11:29.589 filename=/dev/nvme0n1 00:11:29.589 [job1] 00:11:29.589 filename=/dev/nvme0n2 00:11:29.589 [job2] 00:11:29.589 filename=/dev/nvme0n3 00:11:29.589 [job3] 00:11:29.589 filename=/dev/nvme0n4 00:11:29.589 Could not set queue depth (nvme0n1) 00:11:29.589 
Could not set queue depth (nvme0n2) 00:11:29.589 Could not set queue depth (nvme0n3) 00:11:29.589 Could not set queue depth (nvme0n4) 00:11:29.847 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.847 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.847 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.847 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.847 fio-3.35 00:11:29.847 Starting 4 threads 00:11:31.225 00:11:31.225 job0: (groupid=0, jobs=1): err= 0: pid=2731326: Wed Nov 20 12:25:36 2024 00:11:31.225 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:11:31.225 slat (nsec): min=4814, max=36032, avg=12335.00, stdev=4653.41 00:11:31.225 clat (usec): min=73, max=272, avg=116.43, stdev=28.37 00:11:31.225 lat (usec): min=92, max=283, avg=128.76, stdev=26.99 00:11:31.225 clat percentiles (usec): 00:11:31.225 | 1.00th=[ 87], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 94], 00:11:31.225 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 102], 60.00th=[ 108], 00:11:31.225 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 157], 95.00th=[ 165], 00:11:31.225 | 99.00th=[ 198], 99.50th=[ 215], 99.90th=[ 239], 99.95th=[ 243], 00:11:31.225 | 99.99th=[ 273] 00:11:31.225 write: IOPS=3861, BW=15.1MiB/s (15.8MB/s)(15.1MiB/1001msec); 0 zone resets 00:11:31.225 slat (nsec): min=5239, max=55744, avg=14905.01, stdev=5718.95 00:11:31.225 clat (usec): min=75, max=356, avg=117.20, stdev=30.07 00:11:31.225 lat (usec): min=81, max=389, avg=132.11, stdev=28.31 00:11:31.225 clat percentiles (usec): 00:11:31.225 | 1.00th=[ 81], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 90], 00:11:31.226 | 30.00th=[ 93], 40.00th=[ 98], 50.00th=[ 105], 60.00th=[ 128], 00:11:31.226 | 70.00th=[ 139], 80.00th=[ 147], 90.00th=[ 157], 95.00th=[ 165], 00:11:31.226 | 99.00th=[ 202], 99.50th=[ 219], 99.90th=[ 247], 99.95th=[ 285], 00:11:31.226 | 99.99th=[ 359] 00:11:31.226 bw ( KiB/s): min=14392, max=14392, per=23.05%, avg=14392.00, stdev= 0.00, samples=1 00:11:31.226 iops : min= 3598, max= 3598, avg=3598.00, stdev= 0.00, samples=1 00:11:31.226 lat (usec) : 100=44.45%, 250=55.50%, 500=0.05% 00:11:31.226 cpu : usr=6.40%, sys=12.10%, ctx=7449, majf=0, minf=11 00:11:31.226 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:31.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.226 issued rwts: total=3584,3865,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.226 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:31.226 job1: (groupid=0, jobs=1): err= 0: pid=2731332: Wed Nov 20 12:25:36 2024 00:11:31.226 read: IOPS=4053, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1002msec) 00:11:31.226 slat (nsec): min=4758, max=37983, avg=12022.63, stdev=4791.99 00:11:31.226 clat (usec): min=78, max=233, avg=106.35, stdev=17.92 00:11:31.226 lat (usec): min=90, max=241, avg=118.37, stdev=16.43 00:11:31.226 clat percentiles (usec): 00:11:31.226 | 1.00th=[ 85], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 94], 00:11:31.226 | 30.00th=[ 96], 40.00th=[ 98], 50.00th=[ 100], 60.00th=[ 103], 00:11:31.226 | 70.00th=[ 108], 80.00th=[ 114], 90.00th=[ 143], 95.00th=[ 147], 00:11:31.226 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 192], 99.95th=[ 223], 00:11:31.226 | 99.99th=[ 235] 
00:11:31.226 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:11:31.226 slat (nsec): min=5195, max=59396, avg=14325.74, stdev=5321.55 00:11:31.226 clat (usec): min=77, max=518, avg=105.39, stdev=22.34 00:11:31.226 lat (usec): min=91, max=525, avg=119.72, stdev=22.14 00:11:31.226 clat percentiles (usec): 00:11:31.226 | 1.00th=[ 83], 5.00th=[ 86], 10.00th=[ 88], 20.00th=[ 90], 00:11:31.226 | 30.00th=[ 92], 40.00th=[ 94], 50.00th=[ 97], 60.00th=[ 100], 00:11:31.226 | 70.00th=[ 106], 80.00th=[ 129], 90.00th=[ 139], 95.00th=[ 145], 00:11:31.226 | 99.00th=[ 172], 99.50th=[ 182], 99.90th=[ 221], 99.95th=[ 241], 00:11:31.226 | 99.99th=[ 519] 00:11:31.226 bw ( KiB/s): min=16384, max=16384, per=26.24%, avg=16384.00, stdev= 0.00, samples=2 00:11:31.226 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:11:31.226 lat (usec) : 100=54.34%, 250=45.64%, 500=0.01%, 750=0.01% 00:11:31.226 cpu : usr=6.59%, sys=12.99%, ctx=8159, majf=0, minf=10 00:11:31.226 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:31.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.226 issued rwts: total=4062,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.226 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:31.226 job2: (groupid=0, jobs=1): err= 0: pid=2731357: Wed Nov 20 12:25:36 2024 00:11:31.226 read: IOPS=3357, BW=13.1MiB/s (13.8MB/s)(13.1MiB/1001msec) 00:11:31.226 slat (nsec): min=4924, max=44902, avg=12608.25, stdev=5304.33 00:11:31.226 clat (usec): min=95, max=270, avg=132.70, stdev=26.57 00:11:31.226 lat (usec): min=103, max=305, avg=145.31, stdev=26.99 00:11:31.226 clat percentiles (usec): 00:11:31.226 | 1.00th=[ 101], 5.00th=[ 105], 10.00th=[ 108], 20.00th=[ 112], 00:11:31.226 | 30.00th=[ 116], 40.00th=[ 119], 50.00th=[ 123], 60.00th=[ 129], 00:11:31.226 | 70.00th=[ 149], 80.00th=[ 159], 90.00th=[ 169], 95.00th=[ 180], 00:11:31.226 | 99.00th=[ 219], 99.50th=[ 237], 99.90th=[ 258], 99.95th=[ 269], 00:11:31.226 | 99.99th=[ 273] 00:11:31.226 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:31.226 slat (nsec): min=5374, max=62656, avg=14229.36, stdev=5631.97 00:11:31.226 clat (usec): min=89, max=272, avg=121.80, stdev=22.75 00:11:31.226 lat (usec): min=97, max=318, avg=136.03, stdev=24.48 00:11:31.226 clat percentiles (usec): 00:11:31.226 | 1.00th=[ 95], 5.00th=[ 99], 10.00th=[ 101], 20.00th=[ 104], 00:11:31.226 | 30.00th=[ 108], 40.00th=[ 110], 50.00th=[ 113], 60.00th=[ 118], 00:11:31.226 | 70.00th=[ 130], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 163], 00:11:31.226 | 99.00th=[ 192], 99.50th=[ 204], 99.90th=[ 229], 99.95th=[ 245], 00:11:31.226 | 99.99th=[ 273] 00:11:31.226 bw ( KiB/s): min=14776, max=14776, per=23.66%, avg=14776.00, stdev= 0.00, samples=1 00:11:31.226 iops : min= 3694, max= 3694, avg=3694.00, stdev= 0.00, samples=1 00:11:31.226 lat (usec) : 100=4.42%, 250=95.46%, 500=0.12% 00:11:31.226 cpu : usr=4.80%, sys=12.40%, ctx=6946, majf=0, minf=5 00:11:31.226 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:31.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.226 issued rwts: total=3361,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.226 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:31.226 job3: (groupid=0, 
jobs=1): err= 0: pid=2731364: Wed Nov 20 12:25:36 2024 00:11:31.226 read: IOPS=3589, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:11:31.226 slat (nsec): min=5023, max=47410, avg=12225.19, stdev=4578.15 00:11:31.226 clat (usec): min=88, max=199, avg=112.27, stdev=10.07 00:11:31.226 lat (usec): min=96, max=233, avg=124.49, stdev=11.99 00:11:31.226 clat percentiles (usec): 00:11:31.226 | 1.00th=[ 95], 5.00th=[ 99], 10.00th=[ 101], 20.00th=[ 105], 00:11:31.226 | 30.00th=[ 108], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 114], 00:11:31.226 | 70.00th=[ 116], 80.00th=[ 119], 90.00th=[ 124], 95.00th=[ 130], 00:11:31.226 | 99.00th=[ 147], 99.50th=[ 155], 99.90th=[ 172], 99.95th=[ 190], 00:11:31.226 | 99.99th=[ 200] 00:11:31.226 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:11:31.226 slat (nsec): min=5563, max=53158, avg=14660.12, stdev=5712.52 00:11:31.226 clat (usec): min=82, max=362, avg=113.64, stdev=22.81 00:11:31.226 lat (usec): min=91, max=407, avg=128.30, stdev=24.69 00:11:31.226 clat percentiles (usec): 00:11:31.226 | 1.00th=[ 89], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 99], 00:11:31.226 | 30.00th=[ 101], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 110], 00:11:31.226 | 70.00th=[ 114], 80.00th=[ 123], 90.00th=[ 149], 95.00th=[ 163], 00:11:31.226 | 99.00th=[ 190], 99.50th=[ 210], 99.90th=[ 245], 99.95th=[ 262], 00:11:31.226 | 99.99th=[ 363] 00:11:31.226 bw ( KiB/s): min=16384, max=16384, per=26.24%, avg=16384.00, stdev= 0.00, samples=1 00:11:31.226 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:11:31.226 lat (usec) : 100=16.86%, 250=83.11%, 500=0.04% 00:11:31.226 cpu : usr=6.10%, sys=12.80%, ctx=7690, majf=0, minf=9 00:11:31.226 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:31.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.226 issued rwts: total=3593,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.226 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:31.226 00:11:31.226 Run status group 0 (all jobs): 00:11:31.226 READ: bw=56.9MiB/s (59.7MB/s), 13.1MiB/s-15.8MiB/s (13.8MB/s-16.6MB/s), io=57.0MiB (59.8MB), run=1001-1002msec 00:11:31.226 WRITE: bw=61.0MiB/s (63.9MB/s), 14.0MiB/s-16.0MiB/s (14.7MB/s-16.8MB/s), io=61.1MiB (64.1MB), run=1001-1002msec 00:11:31.226 00:11:31.226 Disk stats (read/write): 00:11:31.226 nvme0n1: ios=3122/3358, merge=0/0, ticks=352/355, in_queue=707, util=87.37% 00:11:31.226 nvme0n2: ios=3389/3584, merge=0/0, ticks=342/352, in_queue=694, util=87.45% 00:11:31.226 nvme0n3: ios=2957/3072, merge=0/0, ticks=369/348, in_queue=717, util=89.16% 00:11:31.226 nvme0n4: ios=3072/3574, merge=0/0, ticks=306/377, in_queue=683, util=89.72% 00:11:31.226 12:25:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:31.226 [global] 00:11:31.226 thread=1 00:11:31.226 invalidate=1 00:11:31.226 rw=write 00:11:31.226 time_based=1 00:11:31.226 runtime=1 00:11:31.226 ioengine=libaio 00:11:31.226 direct=1 00:11:31.226 bs=4096 00:11:31.226 iodepth=128 00:11:31.226 norandommap=0 00:11:31.226 numjobs=1 00:11:31.226 00:11:31.226 verify_dump=1 00:11:31.226 verify_backlog=512 00:11:31.226 verify_state_save=0 00:11:31.226 do_verify=1 00:11:31.226 verify=crc32c-intel 00:11:31.226 [job0] 00:11:31.226 filename=/dev/nvme0n1 00:11:31.227 [job1] 00:11:31.227 
filename=/dev/nvme0n2 00:11:31.227 [job2] 00:11:31.227 filename=/dev/nvme0n3 00:11:31.227 [job3] 00:11:31.227 filename=/dev/nvme0n4 00:11:31.227 Could not set queue depth (nvme0n1) 00:11:31.227 Could not set queue depth (nvme0n2) 00:11:31.227 Could not set queue depth (nvme0n3) 00:11:31.227 Could not set queue depth (nvme0n4) 00:11:31.227 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:31.227 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:31.227 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:31.227 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:31.227 fio-3.35 00:11:31.227 Starting 4 threads 00:11:32.602 00:11:32.602 job0: (groupid=0, jobs=1): err= 0: pid=2731567: Wed Nov 20 12:25:38 2024 00:11:32.602 read: IOPS=8678, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1002msec) 00:11:32.602 slat (usec): min=3, max=1132, avg=57.31, stdev=199.88 00:11:32.602 clat (usec): min=711, max=10006, avg=7509.93, stdev=787.64 00:11:32.602 lat (usec): min=1710, max=10711, avg=7567.24, stdev=799.19 00:11:32.602 clat percentiles (usec): 00:11:32.602 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6783], 20.00th=[ 6915], 00:11:32.602 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7439], 60.00th=[ 7635], 00:11:32.602 | 70.00th=[ 7767], 80.00th=[ 8094], 90.00th=[ 8717], 95.00th=[ 8848], 00:11:32.602 | 99.00th=[ 9634], 99.50th=[ 9765], 99.90th=[ 9896], 99.95th=[ 9896], 00:11:32.602 | 99.99th=[10028] 00:11:32.602 write: IOPS=8686, BW=33.9MiB/s (35.6MB/s)(34.0MiB/1002msec); 0 zone resets 00:11:32.602 slat (usec): min=3, max=1099, avg=52.73, stdev=180.31 00:11:32.602 clat (usec): min=5881, max=9786, avg=7064.95, stdev=626.35 00:11:32.602 lat (usec): min=5885, max=9799, avg=7117.68, stdev=640.18 00:11:32.602 clat percentiles (usec): 00:11:32.602 | 1.00th=[ 6128], 5.00th=[ 6259], 10.00th=[ 6390], 20.00th=[ 6521], 00:11:32.602 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:11:32.602 | 70.00th=[ 7308], 80.00th=[ 7570], 90.00th=[ 7963], 95.00th=[ 8225], 00:11:32.602 | 99.00th=[ 8848], 99.50th=[ 8979], 99.90th=[ 9503], 99.95th=[ 9765], 00:11:32.602 | 99.99th=[ 9765] 00:11:32.602 bw ( KiB/s): min=32768, max=36864, per=44.26%, avg=34816.00, stdev=2896.31, samples=2 00:11:32.602 iops : min= 8192, max= 9216, avg=8704.00, stdev=724.08, samples=2 00:11:32.602 lat (usec) : 750=0.01% 00:11:32.602 lat (msec) : 2=0.09%, 4=0.14%, 10=99.76%, 20=0.01% 00:11:32.602 cpu : usr=6.39%, sys=9.49%, ctx=1146, majf=0, minf=20 00:11:32.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:32.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:32.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:32.603 issued rwts: total=8696,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:32.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:32.603 job1: (groupid=0, jobs=1): err= 0: pid=2731569: Wed Nov 20 12:25:38 2024 00:11:32.603 read: IOPS=3496, BW=13.7MiB/s (14.3MB/s)(13.7MiB/1002msec) 00:11:32.603 slat (usec): min=3, max=4862, avg=136.99, stdev=493.11 00:11:32.603 clat (usec): min=1319, max=26049, avg=17526.48, stdev=7601.48 00:11:32.603 lat (usec): min=1343, max=26063, avg=17663.47, stdev=7643.53 00:11:32.603 clat percentiles (usec): 00:11:32.603 | 1.00th=[ 3720], 5.00th=[ 8094], 
10.00th=[ 8356], 20.00th=[ 8586], 00:11:32.603 | 30.00th=[ 9372], 40.00th=[15270], 50.00th=[18220], 60.00th=[24511], 00:11:32.603 | 70.00th=[25297], 80.00th=[25560], 90.00th=[25822], 95.00th=[25822], 00:11:32.603 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 00:11:32.603 | 99.99th=[26084] 00:11:32.603 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:11:32.603 slat (usec): min=4, max=3296, avg=136.29, stdev=457.22 00:11:32.603 clat (usec): min=7145, max=25047, avg=18137.88, stdev=6515.07 00:11:32.603 lat (usec): min=7150, max=25058, avg=18274.18, stdev=6551.30 00:11:32.603 clat percentiles (usec): 00:11:32.603 | 1.00th=[ 7242], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8717], 00:11:32.603 | 30.00th=[15926], 40.00th=[17433], 50.00th=[22676], 60.00th=[23200], 00:11:32.603 | 70.00th=[23462], 80.00th=[23462], 90.00th=[23725], 95.00th=[24249], 00:11:32.603 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25035], 99.95th=[25035], 00:11:32.603 | 99.99th=[25035] 00:11:32.603 bw ( KiB/s): min=12135, max=16512, per=18.21%, avg=14323.50, stdev=3095.01, samples=2 00:11:32.603 iops : min= 3033, max= 4128, avg=3580.50, stdev=774.28, samples=2 00:11:32.603 lat (msec) : 2=0.20%, 4=0.45%, 10=29.29%, 20=19.88%, 50=50.18% 00:11:32.603 cpu : usr=2.00%, sys=5.59%, ctx=933, majf=0, minf=11 00:11:32.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:32.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:32.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:32.603 issued rwts: total=3503,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:32.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:32.603 job2: (groupid=0, jobs=1): err= 0: pid=2731571: Wed Nov 20 12:25:38 2024 00:11:32.603 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:11:32.603 slat (usec): min=3, max=4632, avg=132.61, stdev=546.62 00:11:32.603 clat (usec): min=5927, max=26003, avg=17103.22, stdev=7267.32 00:11:32.603 lat (usec): min=6482, max=26041, avg=17235.82, stdev=7306.51 00:11:32.603 clat percentiles (usec): 00:11:32.603 | 1.00th=[ 8225], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[10290], 00:11:32.603 | 30.00th=[10552], 40.00th=[11076], 50.00th=[13042], 60.00th=[23987], 00:11:32.603 | 70.00th=[25297], 80.00th=[25560], 90.00th=[25822], 95.00th=[25822], 00:11:32.603 | 99.00th=[25822], 99.50th=[25822], 99.90th=[26084], 99.95th=[26084], 00:11:32.603 | 99.99th=[26084] 00:11:32.603 write: IOPS=3920, BW=15.3MiB/s (16.1MB/s)(15.3MiB/1002msec); 0 zone resets 00:11:32.603 slat (usec): min=4, max=4876, avg=127.61, stdev=518.40 00:11:32.603 clat (usec): min=1463, max=25312, avg=16508.21, stdev=6941.78 00:11:32.603 lat (usec): min=1950, max=25317, avg=16635.81, stdev=6978.03 00:11:32.603 clat percentiles (usec): 00:11:32.603 | 1.00th=[ 4686], 5.00th=[ 8717], 10.00th=[ 8848], 20.00th=[ 9634], 00:11:32.603 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[17957], 60.00th=[23200], 00:11:32.603 | 70.00th=[23200], 80.00th=[23462], 90.00th=[23725], 95.00th=[24249], 00:11:32.603 | 99.00th=[25035], 99.50th=[25297], 99.90th=[25297], 99.95th=[25297], 00:11:32.603 | 99.99th=[25297] 00:11:32.603 bw ( KiB/s): min=12040, max=18368, per=19.33%, avg=15204.00, stdev=4474.57, samples=2 00:11:32.603 iops : min= 3010, max= 4592, avg=3801.00, stdev=1118.64, samples=2 00:11:32.603 lat (msec) : 2=0.07%, 4=0.37%, 10=25.43%, 20=28.62%, 50=45.51% 00:11:32.603 cpu : usr=3.40%, sys=5.39%, ctx=605, majf=0, minf=9 
00:11:32.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:32.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:32.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:32.603 issued rwts: total=3584,3928,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:32.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:32.603 job3: (groupid=0, jobs=1): err= 0: pid=2731572: Wed Nov 20 12:25:38 2024 00:11:32.603 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:11:32.603 slat (usec): min=3, max=4454, avg=153.76, stdev=462.75 00:11:32.603 clat (usec): min=8468, max=25997, avg=19919.77, stdev=5880.10 00:11:32.603 lat (usec): min=9798, max=26009, avg=20073.52, stdev=5908.71 00:11:32.603 clat percentiles (usec): 00:11:32.603 | 1.00th=[10028], 5.00th=[10421], 10.00th=[10945], 20.00th=[12780], 00:11:32.603 | 30.00th=[16057], 40.00th=[18220], 50.00th=[22414], 60.00th=[24773], 00:11:32.603 | 70.00th=[25560], 80.00th=[25560], 90.00th=[25822], 95.00th=[25822], 00:11:32.603 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 00:11:32.603 | 99.99th=[26084] 00:11:32.603 write: IOPS=3499, BW=13.7MiB/s (14.3MB/s)(13.7MiB/1003msec); 0 zone resets 00:11:32.603 slat (usec): min=4, max=3765, avg=144.24, stdev=407.08 00:11:32.603 clat (usec): min=786, max=25293, avg=18559.41, stdev=5886.78 00:11:32.603 lat (usec): min=1991, max=25303, avg=18703.65, stdev=5916.35 00:11:32.603 clat percentiles (usec): 00:11:32.603 | 1.00th=[ 3884], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10945], 00:11:32.603 | 30.00th=[16188], 40.00th=[17433], 50.00th=[22414], 60.00th=[23200], 00:11:32.603 | 70.00th=[23462], 80.00th=[23462], 90.00th=[23725], 95.00th=[24249], 00:11:32.603 | 99.00th=[25035], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:11:32.603 | 99.99th=[25297] 00:11:32.603 bw ( KiB/s): min=12040, max=15016, per=17.20%, avg=13528.00, stdev=2104.35, samples=2 00:11:32.603 iops : min= 3010, max= 3754, avg=3382.00, stdev=526.09, samples=2 00:11:32.603 lat (usec) : 1000=0.02% 00:11:32.603 lat (msec) : 2=0.02%, 4=0.59%, 10=5.76%, 20=40.49%, 50=53.13% 00:11:32.603 cpu : usr=3.19%, sys=5.59%, ctx=954, majf=0, minf=10 00:11:32.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:32.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:32.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:32.603 issued rwts: total=3072,3510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:32.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:32.603 00:11:32.603 Run status group 0 (all jobs): 00:11:32.603 READ: bw=73.5MiB/s (77.1MB/s), 12.0MiB/s-33.9MiB/s (12.6MB/s-35.5MB/s), io=73.7MiB (77.2MB), run=1002-1002msec 00:11:32.603 WRITE: bw=76.8MiB/s (80.6MB/s), 13.7MiB/s-33.9MiB/s (14.3MB/s-35.6MB/s), io=77.1MiB (80.8MB), run=1002-1003msec 00:11:32.603 00:11:32.603 Disk stats (read/write): 00:11:32.603 nvme0n1: ios=7669/7680, merge=0/0, ticks=13364/12333, in_queue=25697, util=87.78% 00:11:32.603 nvme0n2: ios=2485/2560, merge=0/0, ticks=13457/13778, in_queue=27235, util=87.25% 00:11:32.603 nvme0n3: ios=2838/3072, merge=0/0, ticks=13241/13898, in_queue=27139, util=89.26% 00:11:32.603 nvme0n4: ios=2445/2560, merge=0/0, ticks=13603/13763, in_queue=27366, util=89.72% 00:11:32.603 12:25:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 
4096 -d 128 -t randwrite -r 1 -v 00:11:32.603 [global] 00:11:32.603 thread=1 00:11:32.603 invalidate=1 00:11:32.603 rw=randwrite 00:11:32.603 time_based=1 00:11:32.603 runtime=1 00:11:32.603 ioengine=libaio 00:11:32.603 direct=1 00:11:32.603 bs=4096 00:11:32.603 iodepth=128 00:11:32.603 norandommap=0 00:11:32.603 numjobs=1 00:11:32.603 00:11:32.603 verify_dump=1 00:11:32.603 verify_backlog=512 00:11:32.603 verify_state_save=0 00:11:32.603 do_verify=1 00:11:32.603 verify=crc32c-intel 00:11:32.603 [job0] 00:11:32.603 filename=/dev/nvme0n1 00:11:32.603 [job1] 00:11:32.603 filename=/dev/nvme0n2 00:11:32.603 [job2] 00:11:32.603 filename=/dev/nvme0n3 00:11:32.603 [job3] 00:11:32.603 filename=/dev/nvme0n4 00:11:32.603 Could not set queue depth (nvme0n1) 00:11:32.603 Could not set queue depth (nvme0n2) 00:11:32.603 Could not set queue depth (nvme0n3) 00:11:32.603 Could not set queue depth (nvme0n4) 00:11:32.603 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:32.603 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:32.603 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:32.603 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:32.603 fio-3.35 00:11:32.603 Starting 4 threads 00:11:33.981 00:11:33.981 job0: (groupid=0, jobs=1): err= 0: pid=2731730: Wed Nov 20 12:25:39 2024 00:11:33.981 read: IOPS=5044, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1005msec) 00:11:33.981 slat (usec): min=3, max=8188, avg=100.39, stdev=429.37 00:11:33.981 clat (usec): min=2966, max=33092, avg=13002.23, stdev=7699.29 00:11:33.981 lat (usec): min=2970, max=33101, avg=13102.62, stdev=7750.43 00:11:33.981 clat percentiles (usec): 00:11:33.981 | 1.00th=[ 4228], 5.00th=[ 7177], 10.00th=[ 7898], 20.00th=[ 8291], 00:11:33.981 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9110], 00:11:33.981 | 70.00th=[11338], 80.00th=[25035], 90.00th=[27395], 95.00th=[27919], 00:11:33.981 | 99.00th=[29492], 99.50th=[30540], 99.90th=[32113], 99.95th=[33162], 00:11:33.981 | 99.99th=[33162] 00:11:33.981 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:11:33.981 slat (usec): min=4, max=4467, avg=89.06, stdev=354.62 00:11:33.981 clat (usec): min=4726, max=30719, avg=11995.79, stdev=7486.16 00:11:33.981 lat (usec): min=5452, max=30727, avg=12084.85, stdev=7539.30 00:11:33.981 clat percentiles (usec): 00:11:33.981 | 1.00th=[ 6194], 5.00th=[ 7373], 10.00th=[ 7570], 20.00th=[ 7767], 00:11:33.981 | 30.00th=[ 8160], 40.00th=[ 8291], 50.00th=[ 8356], 60.00th=[ 8455], 00:11:33.981 | 70.00th=[ 8586], 80.00th=[15008], 90.00th=[26870], 95.00th=[27657], 00:11:33.981 | 99.00th=[28443], 99.50th=[28443], 99.90th=[30540], 99.95th=[30540], 00:11:33.981 | 99.99th=[30802] 00:11:33.981 bw ( KiB/s): min=12288, max=28672, per=21.40%, avg=20480.00, stdev=11585.24, samples=2 00:11:33.981 iops : min= 3072, max= 7168, avg=5120.00, stdev=2896.31, samples=2 00:11:33.981 lat (msec) : 4=0.43%, 10=72.00%, 20=6.54%, 50=21.03% 00:11:33.981 cpu : usr=4.98%, sys=7.37%, ctx=769, majf=0, minf=21 00:11:33.981 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:33.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.981 issued rwts: 
total=5070,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.981 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.981 job1: (groupid=0, jobs=1): err= 0: pid=2731731: Wed Nov 20 12:25:39 2024 00:11:33.981 read: IOPS=7523, BW=29.4MiB/s (30.8MB/s)(29.4MiB/1002msec) 00:11:33.981 slat (usec): min=3, max=7346, avg=63.73, stdev=263.96 00:11:33.981 clat (usec): min=493, max=19191, avg=8767.46, stdev=1944.20 00:11:33.981 lat (usec): min=1276, max=19756, avg=8831.19, stdev=1947.96 00:11:33.981 clat percentiles (usec): 00:11:33.981 | 1.00th=[ 5473], 5.00th=[ 7242], 10.00th=[ 7701], 20.00th=[ 7963], 00:11:33.981 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8225], 60.00th=[ 8455], 00:11:33.981 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[10159], 95.00th=[12387], 00:11:33.981 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19006], 99.95th=[19268], 00:11:33.981 | 99.99th=[19268] 00:11:33.981 write: IOPS=7664, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec); 0 zone resets 00:11:33.981 slat (usec): min=4, max=6541, avg=58.23, stdev=226.18 00:11:33.981 clat (usec): min=1994, max=20647, avg=7940.96, stdev=1289.26 00:11:33.981 lat (usec): min=2007, max=20653, avg=7999.18, stdev=1293.09 00:11:33.981 clat percentiles (usec): 00:11:33.981 | 1.00th=[ 5473], 5.00th=[ 6652], 10.00th=[ 7177], 20.00th=[ 7373], 00:11:33.981 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7898], 00:11:33.981 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 9241], 00:11:33.981 | 99.00th=[12649], 99.50th=[16581], 99.90th=[20055], 99.95th=[20579], 00:11:33.981 | 99.99th=[20579] 00:11:33.981 bw ( KiB/s): min=28672, max=32768, per=32.11%, avg=30720.00, stdev=2896.31, samples=2 00:11:33.981 iops : min= 7168, max= 8192, avg=7680.00, stdev=724.08, samples=2 00:11:33.981 lat (usec) : 500=0.01% 00:11:33.981 lat (msec) : 2=0.12%, 4=0.47%, 10=92.42%, 20=6.91%, 50=0.08% 00:11:33.981 cpu : usr=6.29%, sys=11.89%, ctx=957, majf=0, minf=14 00:11:33.981 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:33.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.981 issued rwts: total=7539,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.981 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.981 job2: (groupid=0, jobs=1): err= 0: pid=2731732: Wed Nov 20 12:25:39 2024 00:11:33.981 read: IOPS=5733, BW=22.4MiB/s (23.5MB/s)(22.4MiB/1002msec) 00:11:33.981 slat (usec): min=3, max=3814, avg=79.13, stdev=296.58 00:11:33.981 clat (usec): min=605, max=29289, avg=10368.58, stdev=2079.14 00:11:33.981 lat (usec): min=1181, max=29307, avg=10447.72, stdev=2086.58 00:11:33.981 clat percentiles (usec): 00:11:33.981 | 1.00th=[ 3851], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[ 9765], 00:11:33.981 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10159], 00:11:33.981 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11207], 95.00th=[12256], 00:11:33.981 | 99.00th=[19006], 99.50th=[22938], 99.90th=[29230], 99.95th=[29230], 00:11:33.981 | 99.99th=[29230] 00:11:33.981 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:11:33.981 slat (usec): min=4, max=6427, avg=80.83, stdev=314.10 00:11:33.981 clat (usec): min=2386, max=33320, avg=10995.35, stdev=4853.16 00:11:33.981 lat (usec): min=2406, max=33330, avg=11076.19, stdev=4882.64 00:11:33.981 clat percentiles (usec): 00:11:33.981 | 1.00th=[ 6783], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9241], 
00:11:33.981 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:11:33.981 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10814], 95.00th=[27395], 00:11:33.981 | 99.00th=[30540], 99.50th=[30802], 99.90th=[32113], 99.95th=[32637], 00:11:33.981 | 99.99th=[33424] 00:11:33.981 bw ( KiB/s): min=23408, max=25632, per=25.63%, avg=24520.00, stdev=1572.61, samples=2 00:11:33.981 iops : min= 5852, max= 6408, avg=6130.00, stdev=393.15, samples=2 00:11:33.981 lat (usec) : 750=0.01%, 1000=0.01% 00:11:33.981 lat (msec) : 2=0.11%, 4=0.45%, 10=50.36%, 20=45.24%, 50=3.83% 00:11:33.981 cpu : usr=4.70%, sys=9.69%, ctx=779, majf=0, minf=7 00:11:33.981 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:33.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.981 issued rwts: total=5745,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.981 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.981 job3: (groupid=0, jobs=1): err= 0: pid=2731733: Wed Nov 20 12:25:39 2024 00:11:33.981 read: IOPS=4929, BW=19.3MiB/s (20.2MB/s)(19.4MiB/1006msec) 00:11:33.981 slat (usec): min=3, max=12113, avg=100.79, stdev=415.35 00:11:33.981 clat (usec): min=4420, max=37426, avg=13296.70, stdev=3500.52 00:11:33.982 lat (usec): min=5276, max=37435, avg=13397.49, stdev=3524.03 00:11:33.982 clat percentiles (usec): 00:11:33.982 | 1.00th=[ 7898], 5.00th=[10028], 10.00th=[10421], 20.00th=[10814], 00:11:33.982 | 30.00th=[10945], 40.00th=[11994], 50.00th=[13698], 60.00th=[13960], 00:11:33.982 | 70.00th=[14091], 80.00th=[14615], 90.00th=[16057], 95.00th=[17171], 00:11:33.982 | 99.00th=[27395], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:11:33.982 | 99.99th=[37487] 00:11:33.982 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:11:33.982 slat (usec): min=4, max=6041, avg=89.73, stdev=321.48 00:11:33.982 clat (usec): min=4106, max=17976, avg=11957.07, stdev=2150.91 00:11:33.982 lat (usec): min=4113, max=18848, avg=12046.80, stdev=2176.45 00:11:33.982 clat percentiles (usec): 00:11:33.982 | 1.00th=[ 6063], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10159], 00:11:33.982 | 30.00th=[10290], 40.00th=[10683], 50.00th=[12518], 60.00th=[13173], 00:11:33.982 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14222], 95.00th=[14746], 00:11:33.982 | 99.00th=[16450], 99.50th=[17433], 99.90th=[17695], 99.95th=[17695], 00:11:33.982 | 99.99th=[17957] 00:11:33.982 bw ( KiB/s): min=18184, max=22776, per=21.40%, avg=20480.00, stdev=3247.03, samples=2 00:11:33.982 iops : min= 4546, max= 5694, avg=5120.00, stdev=811.76, samples=2 00:11:33.982 lat (msec) : 10=8.19%, 20=90.55%, 50=1.26% 00:11:33.982 cpu : usr=4.58%, sys=9.25%, ctx=774, majf=0, minf=9 00:11:33.982 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:33.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.982 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.982 issued rwts: total=4959,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.982 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.982 00:11:33.982 Run status group 0 (all jobs): 00:11:33.982 READ: bw=90.5MiB/s (94.9MB/s), 19.3MiB/s-29.4MiB/s (20.2MB/s-30.8MB/s), io=91.1MiB (95.5MB), run=1002-1006msec 00:11:33.982 WRITE: bw=93.4MiB/s (98.0MB/s), 19.9MiB/s-29.9MiB/s (20.8MB/s-31.4MB/s), io=94.0MiB (98.6MB), run=1002-1006msec 
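Note: the randwrite pass summarized above (and its disk stats just below) was driven by the job file the fio-wrapper printed at the start of the run. As a rough standalone reproduction -- the wrapper flags -i 4096 -d 128 -t randwrite -r 1 -v appear to map onto bs, iodepth, rw, runtime and the verify options, and the /dev/nvme0n[1-4] filenames assume the same four connected namespaces -- the equivalent invocation would be something like:

    cat > nvmf-randwrite.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=randwrite
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=128
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4
    EOF
    fio nvmf-randwrite.fio   # one timed 1s pass per job, then crc32c verification

The per-device "Could not set queue depth" warnings appear benign here: fio could not adjust the block-layer queue setting, but all four jobs still started and completed.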
00:11:33.982 00:11:33.982 Disk stats (read/write): 00:11:33.982 nvme0n1: ios=4658/4917, merge=0/0, ticks=20860/20158, in_queue=41018, util=87.07% 00:11:33.982 nvme0n2: ios=6316/6656, merge=0/0, ticks=33432/30679, in_queue=64111, util=87.56% 00:11:33.982 nvme0n3: ios=4918/5120, merge=0/0, ticks=33630/34273, in_queue=67903, util=89.06% 00:11:33.982 nvme0n4: ios=4149/4608, merge=0/0, ticks=21333/20973, in_queue=42306, util=89.32% 00:11:33.982 12:25:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:33.982 12:25:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2731832 00:11:33.982 12:25:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:33.982 12:25:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:33.982 [global] 00:11:33.982 thread=1 00:11:33.982 invalidate=1 00:11:33.982 rw=read 00:11:33.982 time_based=1 00:11:33.982 runtime=10 00:11:33.982 ioengine=libaio 00:11:33.982 direct=1 00:11:33.982 bs=4096 00:11:33.982 iodepth=1 00:11:33.982 norandommap=1 00:11:33.982 numjobs=1 00:11:33.982 00:11:33.982 [job0] 00:11:33.982 filename=/dev/nvme0n1 00:11:33.982 [job1] 00:11:33.982 filename=/dev/nvme0n2 00:11:33.982 [job2] 00:11:33.982 filename=/dev/nvme0n3 00:11:33.982 [job3] 00:11:33.982 filename=/dev/nvme0n4 00:11:33.982 Could not set queue depth (nvme0n1) 00:11:33.982 Could not set queue depth (nvme0n2) 00:11:33.982 Could not set queue depth (nvme0n3) 00:11:33.982 Could not set queue depth (nvme0n4) 00:11:34.241 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:34.241 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:34.241 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:34.241 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:34.241 fio-3.35 00:11:34.241 Starting 4 threads 00:11:37.523 12:25:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:37.524 12:25:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:37.524 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=83673088, buflen=4096 00:11:37.524 fio: pid=2731891, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:37.524 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=84172800, buflen=4096 00:11:37.524 fio: pid=2731890, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:37.524 12:25:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:37.524 12:25:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:37.782 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=42577920, buflen=4096 00:11:37.782 fio: pid=2731888, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:38.040 12:25:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:38.040 12:25:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:38.298 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=53014528, buflen=4096 00:11:38.298 fio: pid=2731889, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:38.298 12:25:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:38.298 12:25:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:38.298 00:11:38.298 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2731888: Wed Nov 20 12:25:43 2024 00:11:38.298 read: IOPS=7432, BW=29.0MiB/s (30.4MB/s)(105MiB/3603msec) 00:11:38.298 slat (usec): min=4, max=34621, avg=11.76, stdev=238.43 00:11:38.298 clat (usec): min=69, max=1508, avg=120.48, stdev=40.19 00:11:38.298 lat (usec): min=74, max=34794, avg=132.24, stdev=242.24 00:11:38.298 clat percentiles (usec): 00:11:38.298 | 1.00th=[ 78], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 90], 00:11:38.298 | 30.00th=[ 94], 40.00th=[ 98], 50.00th=[ 103], 60.00th=[ 113], 00:11:38.298 | 70.00th=[ 139], 80.00th=[ 155], 90.00th=[ 172], 95.00th=[ 212], 00:11:38.298 | 99.00th=[ 231], 99.50th=[ 239], 99.90th=[ 302], 99.95th=[ 310], 00:11:38.298 | 99.99th=[ 392] 00:11:38.298 bw ( KiB/s): min=22408, max=34024, per=28.63%, avg=28178.67, stdev=5236.06, samples=6 00:11:38.298 iops : min= 5602, max= 8506, avg=7044.67, stdev=1309.02, samples=6 00:11:38.298 lat (usec) : 100=45.13%, 250=54.54%, 500=0.32% 00:11:38.298 lat (msec) : 2=0.01% 00:11:38.298 cpu : usr=3.36%, sys=9.49%, ctx=26786, majf=0, minf=2 00:11:38.298 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.298 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.298 issued rwts: total=26780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.298 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.298 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2731889: Wed Nov 20 12:25:43 2024 00:11:38.298 read: IOPS=7432, BW=29.0MiB/s (30.4MB/s)(115MiB/3946msec) 00:11:38.298 slat (usec): min=4, max=15858, avg=10.96, stdev=164.14 00:11:38.298 clat (usec): min=69, max=28261, avg=121.51, stdev=201.61 00:11:38.298 lat (usec): min=73, max=28272, avg=132.47, stdev=260.42 00:11:38.298 clat percentiles (usec): 00:11:38.298 | 1.00th=[ 76], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 87], 00:11:38.298 | 30.00th=[ 90], 40.00th=[ 95], 50.00th=[ 102], 60.00th=[ 120], 00:11:38.298 | 70.00th=[ 149], 80.00th=[ 159], 90.00th=[ 169], 95.00th=[ 180], 00:11:38.299 | 99.00th=[ 221], 99.50th=[ 235], 99.90th=[ 277], 99.95th=[ 297], 00:11:38.299 | 99.99th=[ 9896] 00:11:38.299 bw ( KiB/s): min=22632, max=35889, per=29.08%, avg=28618.43, stdev=4466.81, samples=7 00:11:38.299 iops : min= 5658, max= 8972, avg=7154.57, stdev=1116.63, samples=7 00:11:38.299 lat (usec) : 100=48.07%, 250=51.65%, 500=0.26%, 750=0.01% 00:11:38.299 lat (msec) : 10=0.01%, 20=0.01%, 50=0.01% 00:11:38.299 cpu : usr=2.94%, sys=9.28%, ctx=29340, 
majf=0, minf=2 00:11:38.299 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.299 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.299 issued rwts: total=29328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.299 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.299 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2731890: Wed Nov 20 12:25:43 2024 00:11:38.299 read: IOPS=6277, BW=24.5MiB/s (25.7MB/s)(80.3MiB/3274msec) 00:11:38.299 slat (usec): min=4, max=13862, avg=11.12, stdev=130.85 00:11:38.299 clat (usec): min=87, max=30199, avg=145.75, stdev=212.71 00:11:38.299 lat (usec): min=93, max=30207, avg=156.88, stdev=249.82 00:11:38.299 clat percentiles (usec): 00:11:38.299 | 1.00th=[ 96], 5.00th=[ 102], 10.00th=[ 105], 20.00th=[ 111], 00:11:38.299 | 30.00th=[ 115], 40.00th=[ 122], 50.00th=[ 149], 60.00th=[ 157], 00:11:38.299 | 70.00th=[ 163], 80.00th=[ 172], 90.00th=[ 190], 95.00th=[ 215], 00:11:38.299 | 99.00th=[ 233], 99.50th=[ 241], 99.90th=[ 297], 99.95th=[ 314], 00:11:38.299 | 99.99th=[ 545] 00:11:38.299 bw ( KiB/s): min=22016, max=28928, per=25.10%, avg=24706.67, stdev=2440.18, samples=6 00:11:38.299 iops : min= 5504, max= 7232, avg=6176.67, stdev=610.05, samples=6 00:11:38.299 lat (usec) : 100=3.58%, 250=96.11%, 500=0.29%, 750=0.01% 00:11:38.299 lat (msec) : 50=0.01% 00:11:38.299 cpu : usr=3.06%, sys=8.49%, ctx=20557, majf=0, minf=1 00:11:38.299 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.299 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.299 issued rwts: total=20551,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.299 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.299 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2731891: Wed Nov 20 12:25:43 2024 00:11:38.299 read: IOPS=6901, BW=27.0MiB/s (28.3MB/s)(79.8MiB/2960msec) 00:11:38.299 slat (nsec): min=4473, max=59412, avg=10731.67, stdev=4633.25 00:11:38.299 clat (usec): min=86, max=450, avg=130.60, stdev=34.85 00:11:38.299 lat (usec): min=95, max=457, avg=141.33, stdev=34.09 00:11:38.299 clat percentiles (usec): 00:11:38.299 | 1.00th=[ 95], 5.00th=[ 100], 10.00th=[ 103], 20.00th=[ 106], 00:11:38.299 | 30.00th=[ 110], 40.00th=[ 112], 50.00th=[ 115], 60.00th=[ 120], 00:11:38.299 | 70.00th=[ 133], 80.00th=[ 159], 90.00th=[ 182], 95.00th=[ 215], 00:11:38.299 | 99.00th=[ 233], 99.50th=[ 241], 99.90th=[ 273], 99.95th=[ 285], 00:11:38.299 | 99.99th=[ 322] 00:11:38.299 bw ( KiB/s): min=24568, max=31512, per=29.33%, avg=28862.40, stdev=3213.34, samples=5 00:11:38.299 iops : min= 6142, max= 7878, avg=7215.60, stdev=803.34, samples=5 00:11:38.299 lat (usec) : 100=4.90%, 250=94.78%, 500=0.31% 00:11:38.299 cpu : usr=3.89%, sys=10.00%, ctx=20434, majf=0, minf=1 00:11:38.299 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.299 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.299 issued rwts: total=20429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.299 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.299 00:11:38.299 Run status group 0 
(all jobs): 00:11:38.299 READ: bw=96.1MiB/s (101MB/s), 24.5MiB/s-29.0MiB/s (25.7MB/s-30.4MB/s), io=379MiB (398MB), run=2960-3946msec 00:11:38.299 00:11:38.299 Disk stats (read/write): 00:11:38.299 nvme0n1: ios=24452/0, merge=0/0, ticks=2843/0, in_queue=2843, util=94.51% 00:11:38.299 nvme0n2: ios=28620/0, merge=0/0, ticks=3296/0, in_queue=3296, util=95.22% 00:11:38.299 nvme0n3: ios=19277/0, merge=0/0, ticks=2708/0, in_queue=2708, util=96.05% 00:11:38.299 nvme0n4: ios=20095/0, merge=0/0, ticks=2508/0, in_queue=2508, util=96.76% 00:11:38.557 12:25:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:38.557 12:25:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:38.815 12:25:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:38.815 12:25:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:39.381 12:25:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:39.381 12:25:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:39.639 12:25:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:39.639 12:25:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:39.898 12:25:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:39.898 12:25:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2731832 00:11:39.898 12:25:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:39.898 12:25:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:40.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.833 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:40.833 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:40.833 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:40.833 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.833 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:40.833 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.833 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:40.833 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:40.833 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 
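Note: the fio.sh trace above is the hotplug scenario itself: a ten-second read workload is left running against the exported namespaces while the backing bdevs are deleted over RPC, so the outstanding reads fail with "Operation not supported" and fio exits nonzero (fio_status=4). A minimal sketch of the pattern, using the same commands the trace shows (paths shortened; the exact status-capture line in fio.sh is approximated):

    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    # Hot-remove the bdevs that back the active namespaces.
    scripts/rpc.py bdev_raid_delete concat0
    scripts/rpc.py bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        scripts/rpc.py bdev_malloc_delete "$m"
    done
    fio_status=0
    wait "$fio_pid" || fio_status=$?
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    if [ "$fio_status" -eq 0 ]; then
        echo 'nvmf hotplug test: fio did not fail as expected' >&2
    else
        echo 'nvmf hotplug test: fio failed as expected'
    fi

The echoed confirmation follows immediately below.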
00:11:40.833 nvmf hotplug test: fio failed as expected 00:11:40.833 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:41.400 rmmod nvme_rdma 00:11:41.400 rmmod nvme_fabrics 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2730256 ']' 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2730256 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2730256 ']' 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2730256 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2730256 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2730256' 00:11:41.400 killing process with pid 2730256 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2730256 00:11:41.400 12:25:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2730256 00:11:41.660 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:41.660 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:41.660 00:11:41.660 real 0m25.108s 00:11:41.660 user 1m38.956s 00:11:41.660 sys 0m6.938s 00:11:41.660 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.660 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.660 ************************************ 00:11:41.660 END TEST nvmf_fio_target 00:11:41.660 ************************************ 00:11:41.660 12:25:47 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:11:41.660 12:25:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:41.660 12:25:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.660 12:25:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:41.660 ************************************ 00:11:41.660 START TEST nvmf_bdevio 00:11:41.660 ************************************ 00:11:41.660 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:11:41.660 * Looking for test storage... 00:11:41.660 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:41.660 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:41.660 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:41.660 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:41.920 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:41.920 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.920 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.920 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.920 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.920 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.920 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.920 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.920 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.920 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.920 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.920 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.921 12:25:47 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:41.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.921 --rc genhtml_branch_coverage=1 00:11:41.921 --rc genhtml_function_coverage=1 00:11:41.921 --rc genhtml_legend=1 00:11:41.921 --rc geninfo_all_blocks=1 00:11:41.921 --rc geninfo_unexecuted_blocks=1 00:11:41.921 00:11:41.921 ' 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:41.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.921 --rc genhtml_branch_coverage=1 00:11:41.921 --rc genhtml_function_coverage=1 00:11:41.921 --rc genhtml_legend=1 00:11:41.921 --rc geninfo_all_blocks=1 00:11:41.921 --rc geninfo_unexecuted_blocks=1 00:11:41.921 00:11:41.921 ' 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:41.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.921 --rc genhtml_branch_coverage=1 00:11:41.921 --rc genhtml_function_coverage=1 00:11:41.921 --rc genhtml_legend=1 00:11:41.921 --rc geninfo_all_blocks=1 00:11:41.921 --rc geninfo_unexecuted_blocks=1 00:11:41.921 00:11:41.921 ' 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:41.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.921 --rc genhtml_branch_coverage=1 00:11:41.921 --rc genhtml_function_coverage=1 00:11:41.921 --rc genhtml_legend=1 00:11:41.921 --rc geninfo_all_blocks=1 00:11:41.921 --rc geninfo_unexecuted_blocks=1 00:11:41.921 00:11:41.921 ' 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.921 12:25:47 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:41.921 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:41.921 12:25:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:11:44.459 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:11:44.459 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:44.459 12:25:49 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:11:44.459 Found net devices under 0000:83:00.0: mlx_0_0 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.459 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:11:44.460 Found net devices under 0000:83:00.1: mlx_0_1 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 
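Note: the get_ip_address steps traced above boil down to a small helper that prints an interface's first IPv4 address without its prefix length (field 4 of "ip -o -4 addr show" is the CIDR-form address):

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig

The same sequence runs again for mlx_0_1 below and yields 192.168.100.9.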
00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:44.460 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:44.460 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:11:44.460 altname enp131s0f0np0 00:11:44.460 inet 192.168.100.8/24 scope global mlx_0_0 00:11:44.460 valid_lft forever preferred_lft forever 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:44.460 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:44.460 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:11:44.460 altname enp131s0f1np1 00:11:44.460 inet 192.168.100.9/24 scope global mlx_0_1 00:11:44.460 valid_lft forever preferred_lft forever 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:44.460 192.168.100.9' 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:44.460 192.168.100.9' 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:44.460 192.168.100.9' 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:44.460 12:25:49 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:44.460 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:44.461 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:44.461 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:44.461 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2733887 00:11:44.461 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2733887 00:11:44.461 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2733887 ']' 00:11:44.461 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.461 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.461 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.461 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.461 12:25:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:44.461 [2024-11-20 12:25:49.864286] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:11:44.461 [2024-11-20 12:25:49.864400] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.461 [2024-11-20 12:25:49.991135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.461 [2024-11-20 12:25:50.102444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.461 [2024-11-20 12:25:50.102564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.461 [2024-11-20 12:25:50.102600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.461 [2024-11-20 12:25:50.102629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.461 [2024-11-20 12:25:50.102655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
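For context on the reactor placement that follows: the -m 0x78 mask passed to nvmf_tgt selects CPU cores 3 through 6, since 0x78 = 0b1111000 (bits 3, 4, 5, and 6 set), which is why the log reports four available cores and starts reactors on exactly cores 3-6. A minimal bash sketch to decode any SPDK core mask (the mask value is the only input; nothing here is SPDK-specific):

  mask=0x78                      # core mask as passed via -m
  for bit in $(seq 0 63); do     # scan the low 64 bits
    if (( (mask >> bit) & 1 )); then
      echo "reactor on core $bit"
    fi
  done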
00:11:44.461 [2024-11-20 12:25:50.105102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:44.461 [2024-11-20 12:25:50.105180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:44.461 [2024-11-20 12:25:50.105230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:44.461 [2024-11-20 12:25:50.105238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.719 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.719 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:44.719 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:44.719 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:44.719 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:44.720 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.720 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:44.720 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.720 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:44.720 [2024-11-20 12:25:50.393364] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10476e0/0x104bbd0) succeed. 00:11:44.720 [2024-11-20 12:25:50.409218] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1048d70/0x108d270) succeed. 
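The configuration steps that follow are the standard SPDK NVMe-oF bring-up sequence; rpc_cmd is effectively the test harness wrapper around scripts/rpc.py, so the same five calls issued by hand would look roughly like this (transport options, bdev geometry, NQN, serial, address, and port all taken from the log lines around this point):

  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The 64 MiB / 512 B geometry matches the "Nvme1n1: 131072 blocks of 512 bytes (64 MiB)" target that bdevio reports further down.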
00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:45.003 Malloc0 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:45.003 [2024-11-20 12:25:50.628209] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:45.003 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:45.003 { 00:11:45.003 "params": { 00:11:45.003 "name": "Nvme$subsystem", 00:11:45.004 "trtype": "$TEST_TRANSPORT", 00:11:45.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:45.004 "adrfam": "ipv4", 00:11:45.004 "trsvcid": "$NVMF_PORT", 00:11:45.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:45.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:45.004 "hdgst": ${hdgst:-false}, 00:11:45.004 "ddgst": ${ddgst:-false} 00:11:45.004 }, 00:11:45.004 "method": "bdev_nvme_attach_controller" 00:11:45.004 } 00:11:45.004 EOF 00:11:45.004 )") 00:11:45.004 12:25:50 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:45.004 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:45.004 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:45.004 12:25:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:45.004 "params": { 00:11:45.004 "name": "Nvme1", 00:11:45.004 "trtype": "rdma", 00:11:45.004 "traddr": "192.168.100.8", 00:11:45.004 "adrfam": "ipv4", 00:11:45.004 "trsvcid": "4420", 00:11:45.004 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:45.004 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:45.004 "hdgst": false, 00:11:45.004 "ddgst": false 00:11:45.004 }, 00:11:45.004 "method": "bdev_nvme_attach_controller" 00:11:45.004 }' 00:11:45.004 [2024-11-20 12:25:50.684143] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:11:45.004 [2024-11-20 12:25:50.684234] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2734006 ] 00:11:45.004 [2024-11-20 12:25:50.757032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:45.266 [2024-11-20 12:25:50.824382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.266 [2024-11-20 12:25:50.824433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.266 [2024-11-20 12:25:50.824438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.525 I/O targets: 00:11:45.525 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:45.525 00:11:45.525 00:11:45.525 CUnit - A unit testing framework for C - Version 2.1-3 00:11:45.525 http://cunit.sourceforge.net/ 00:11:45.525 00:11:45.525 00:11:45.525 Suite: bdevio tests on: Nvme1n1 00:11:45.525 Test: blockdev write read block ...passed 00:11:45.525 Test: blockdev write zeroes read block ...passed 00:11:45.525 Test: blockdev write zeroes read no split ...passed 00:11:45.525 Test: blockdev write zeroes read split ...passed 00:11:45.525 Test: blockdev write zeroes read split partial ...passed 00:11:45.525 Test: blockdev reset ...[2024-11-20 12:25:51.077731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:45.525 [2024-11-20 12:25:51.107549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:11:45.525 [2024-11-20 12:25:51.137590] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:45.525 passed 00:11:45.525 Test: blockdev write read 8 blocks ...passed 00:11:45.525 Test: blockdev write read size > 128k ...passed 00:11:45.525 Test: blockdev write read invalid size ...passed 00:11:45.525 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:45.525 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:45.525 Test: blockdev write read max offset ...passed 00:11:45.525 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:45.525 Test: blockdev writev readv 8 blocks ...passed 00:11:45.525 Test: blockdev writev readv 30 x 1block ...passed 00:11:45.525 Test: blockdev writev readv block ...passed 00:11:45.525 Test: blockdev writev readv size > 128k ...passed 00:11:45.525 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:45.525 Test: blockdev comparev and writev ...[2024-11-20 12:25:51.142006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.525 [2024-11-20 12:25:51.142044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:45.525 [2024-11-20 12:25:51.142065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.525 [2024-11-20 12:25:51.142081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:45.525 [2024-11-20 12:25:51.142312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.525 [2024-11-20 12:25:51.142336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:45.525 [2024-11-20 12:25:51.142354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.525 [2024-11-20 12:25:51.142369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:45.525 [2024-11-20 12:25:51.142570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.526 [2024-11-20 12:25:51.142594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:45.526 [2024-11-20 12:25:51.142612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.526 [2024-11-20 12:25:51.142628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:45.526 [2024-11-20 12:25:51.142857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.526 [2024-11-20 12:25:51.142879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:45.526 [2024-11-20 12:25:51.142896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.526 [2024-11-20 12:25:51.142911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:45.526 passed 00:11:45.526 Test: blockdev nvme passthru rw ...passed 00:11:45.526 Test: blockdev nvme passthru vendor specific ...[2024-11-20 12:25:51.143321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:45.526 [2024-11-20 12:25:51.143346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:45.526 [2024-11-20 12:25:51.143426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:45.526 [2024-11-20 12:25:51.143453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:45.526 [2024-11-20 12:25:51.143532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:45.526 [2024-11-20 12:25:51.143554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:45.526 [2024-11-20 12:25:51.143617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:45.526 [2024-11-20 12:25:51.143637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:45.526 passed 00:11:45.526 Test: blockdev nvme admin passthru ...passed 00:11:45.526 Test: blockdev copy ...passed 00:11:45.526 00:11:45.526 Run Summary: Type Total Ran Passed Failed Inactive 00:11:45.526 suites 1 1 n/a 0 0 00:11:45.526 tests 23 23 23 0 0 00:11:45.526 asserts 152 152 152 0 n/a 00:11:45.526 00:11:45.526 Elapsed time = 0.221 seconds 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:45.785 rmmod nvme_rdma 00:11:45.785 rmmod nvme_fabrics 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:45.785 12:25:51 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2733887 ']' 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2733887 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2733887 ']' 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2733887 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2733887 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2733887' 00:11:45.785 killing process with pid 2733887 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2733887 00:11:45.785 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2733887 00:11:46.045 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:46.045 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:46.045 00:11:46.045 real 0m4.476s 00:11:46.045 user 0m8.623s 00:11:46.045 sys 0m2.378s 00:11:46.045 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.045 12:25:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:46.045 ************************************ 00:11:46.045 END TEST nvmf_bdevio 00:11:46.045 ************************************ 00:11:46.305 12:25:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:46.305 00:11:46.305 real 3m31.133s 00:11:46.305 user 11m16.462s 00:11:46.305 sys 0m52.929s 00:11:46.305 12:25:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.305 12:25:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:46.305 ************************************ 00:11:46.305 END TEST nvmf_target_core 00:11:46.305 ************************************ 00:11:46.305 12:25:51 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:11:46.305 12:25:51 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:46.305 12:25:51 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.305 12:25:51 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:46.305 ************************************ 00:11:46.305 START TEST nvmf_target_extra 00:11:46.305 ************************************ 00:11:46.305 12:25:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:11:46.305 * Looking for test storage... 00:11:46.305 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:11:46.305 12:25:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:46.305 12:25:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:46.305 12:25:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:46.305 12:25:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:46.305 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.305 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.305 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.305 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.305 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.305 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.305 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.305 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:46.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.306 --rc genhtml_branch_coverage=1 00:11:46.306 --rc genhtml_function_coverage=1 00:11:46.306 --rc genhtml_legend=1 00:11:46.306 --rc geninfo_all_blocks=1 00:11:46.306 --rc geninfo_unexecuted_blocks=1 00:11:46.306 00:11:46.306 ' 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:46.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.306 --rc genhtml_branch_coverage=1 00:11:46.306 --rc genhtml_function_coverage=1 00:11:46.306 --rc genhtml_legend=1 00:11:46.306 --rc geninfo_all_blocks=1 00:11:46.306 --rc geninfo_unexecuted_blocks=1 00:11:46.306 00:11:46.306 ' 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:46.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.306 --rc genhtml_branch_coverage=1 00:11:46.306 --rc genhtml_function_coverage=1 00:11:46.306 --rc genhtml_legend=1 00:11:46.306 --rc geninfo_all_blocks=1 00:11:46.306 --rc geninfo_unexecuted_blocks=1 00:11:46.306 00:11:46.306 ' 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:46.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.306 --rc genhtml_branch_coverage=1 00:11:46.306 --rc genhtml_function_coverage=1 00:11:46.306 --rc genhtml_legend=1 00:11:46.306 --rc geninfo_all_blocks=1 00:11:46.306 --rc geninfo_unexecuted_blocks=1 00:11:46.306 00:11:46.306 ' 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.306 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:46.306 ************************************ 00:11:46.306 START TEST nvmf_example 00:11:46.306 ************************************ 00:11:46.306 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:11:46.567 * Looking for test storage... 
00:11:46.567 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.567 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:46.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.568 --rc genhtml_branch_coverage=1 00:11:46.568 --rc genhtml_function_coverage=1 00:11:46.568 --rc genhtml_legend=1 00:11:46.568 --rc geninfo_all_blocks=1 00:11:46.568 --rc geninfo_unexecuted_blocks=1 00:11:46.568 00:11:46.568 ' 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:46.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.568 --rc genhtml_branch_coverage=1 00:11:46.568 --rc genhtml_function_coverage=1 00:11:46.568 --rc genhtml_legend=1 00:11:46.568 --rc geninfo_all_blocks=1 00:11:46.568 --rc geninfo_unexecuted_blocks=1 00:11:46.568 00:11:46.568 ' 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:46.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.568 --rc genhtml_branch_coverage=1 00:11:46.568 --rc genhtml_function_coverage=1 00:11:46.568 --rc genhtml_legend=1 00:11:46.568 --rc geninfo_all_blocks=1 00:11:46.568 --rc geninfo_unexecuted_blocks=1 00:11:46.568 00:11:46.568 ' 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:46.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.568 --rc genhtml_branch_coverage=1 00:11:46.568 --rc genhtml_function_coverage=1 00:11:46.568 --rc genhtml_legend=1 00:11:46.568 --rc geninfo_all_blocks=1 00:11:46.568 --rc geninfo_unexecuted_blocks=1 00:11:46.568 00:11:46.568 ' 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.568 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:46.568 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:46.569 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.569 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.569 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.569 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:46.569 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:46.569 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:46.569 12:25:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
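The device discovery below buckets NICs by PCI vendor:device ID: 0x8086 is Intel (the e810 and x722 arrays) and 0x15b3 is Mellanox; the 0x1015 devices this host reports correspond to ConnectX-4 Lx parts. To reproduce the candidate list by hand, a vendor-filtered lspci query is enough (lspci and the 15b3 vendor ID are standard; on this rig it would list the two 0000:83:00.x functions seen below):

  lspci -nn -d 15b3:   # list all Mellanox PCI functions with [vendor:device] IDs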
00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:11:49.105 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:11:49.105 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:11:49.105 Found net devices under 0000:83:00.0: mlx_0_0 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:11:49.105 Found net devices under 0000:83:00.1: mlx_0_1 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:49.105 12:25:54 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # rdma_device_init 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:49.105 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:49.106 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:49.106 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:11:49.106 altname enp131s0f0np0 00:11:49.106 inet 192.168.100.8/24 scope global mlx_0_0 00:11:49.106 valid_lft forever preferred_lft forever 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:49.106 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:49.106 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:11:49.106 altname enp131s0f1np1 00:11:49.106 inet 192.168.100.9/24 scope global mlx_0_1 00:11:49.106 valid_lft forever preferred_lft forever 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:49.106 12:25:54 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # 
RDMA_IP_LIST='192.168.100.8 00:11:49.106 192.168.100.9' 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:49.106 192.168.100.9' 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:49.106 192.168.100.9' 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2735542 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2735542 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2735542 ']' 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
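A note on how the two addresses used from here on were captured just above: get_ip_address strips the CIDR suffix from `ip -o -4 addr show`, the interface loop stacks one address per line into RDMA_IP_LIST, and head/tail peel off the first and second entries. A condensed sketch of the same idiom (hypothetical helper name, same pipeline as the trace):

# Not the script verbatim; mirrors the ip/awk/cut and head/tail calls above.
get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }

RDMA_IP_LIST="$(get_ip mlx_0_0)
$(get_ip mlx_0_1)"                                          # one address per line
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9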
00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.106 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.365 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.365 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:49.365 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:49.365 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:49.365 12:25:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.365 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:49.365 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.365 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:49.624 12:25:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:01.831 Initializing NVMe Controllers 00:12:01.831 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:01.831 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:01.831 Initialization complete. Launching workers. 00:12:01.831 ======================================================== 00:12:01.831 Latency(us) 00:12:01.831 Device Information : IOPS MiB/s Average min max 00:12:01.831 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17628.30 68.86 3632.17 1087.88 15969.10 00:12:01.831 ======================================================== 00:12:01.831 Total : 17628.30 68.86 3632.17 1087.88 15969.10 00:12:01.831 00:12:01.831 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:01.831 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:01.831 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:01.831 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:01.831 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:01.831 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:01.831 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:01.831 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:01.831 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:01.831 rmmod nvme_rdma 00:12:01.831 rmmod nvme_fabrics 00:12:01.832 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:01.832 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:12:01.832 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:01.832 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2735542 ']' 00:12:01.832 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2735542 00:12:01.832 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2735542 ']' 00:12:01.832 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2735542 00:12:01.832 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:01.832 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:01.832 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2735542 00:12:01.832 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:01.832 12:26:06 
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:01.832 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2735542' 00:12:01.832 killing process with pid 2735542 00:12:01.832 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2735542 00:12:01.832 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2735542 00:12:01.832 nvmf threads initialize successfully 00:12:01.832 bdev subsystem init successfully 00:12:01.832 created an nvmf target service 00:12:01.832 create target's poll groups done 00:12:01.832 all subsystems of target started 00:12:01.832 nvmf target is running 00:12:01.832 all subsystems of target stopped 00:12:01.832 destroy target's poll groups done 00:12:01.832 destroyed the nvmf target service 00:12:01.832 bdev subsystem finish successfully 00:12:01.832 nvmf threads destroy successfully 00:12:01.832 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:01.832 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:01.832 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:01.832 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:01.832 12:26:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.832 00:12:01.832 real 0m14.952s 00:12:01.832 user 0m49.153s 00:12:01.832 sys 0m2.291s 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.832 ************************************ 00:12:01.832 END TEST nvmf_example 00:12:01.832 ************************************ 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:01.832 ************************************ 00:12:01.832 START TEST nvmf_filesystem 00:12:01.832 ************************************ 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma
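Before the filesystem test output begins, the nvmf_example flow that just finished condenses to a short RPC sequence plus one perf invocation. A hedged replay for reference: the test script drives these calls through its rpc_cmd wrapper rather than rpc.py directly, and the paths are this workspace's, but the commands and arguments are the ones traced above.

spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc=$spdk/scripts/rpc.py

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512                     # backing bdev (Malloc0)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# 10 s of 4 KiB random I/O, 30% reads, queue depth 64, over RDMA:
$spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

00:12:01.832 * Looking for test storage... 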
00:12:01.832 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:01.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.832 --rc genhtml_branch_coverage=1 00:12:01.832 --rc genhtml_function_coverage=1 00:12:01.832 --rc genhtml_legend=1 00:12:01.832 --rc geninfo_all_blocks=1 00:12:01.832 --rc geninfo_unexecuted_blocks=1 00:12:01.832 00:12:01.832 ' 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:01.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.832 --rc genhtml_branch_coverage=1 00:12:01.832 --rc genhtml_function_coverage=1 00:12:01.832 --rc genhtml_legend=1 00:12:01.832 --rc geninfo_all_blocks=1 00:12:01.832 --rc geninfo_unexecuted_blocks=1 00:12:01.832 00:12:01.832 ' 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:01.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.832 --rc genhtml_branch_coverage=1 00:12:01.832 --rc genhtml_function_coverage=1 00:12:01.832 --rc genhtml_legend=1 00:12:01.832 --rc geninfo_all_blocks=1 00:12:01.832 --rc geninfo_unexecuted_blocks=1 00:12:01.832 00:12:01.832 ' 00:12:01.832 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:01.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.832 --rc genhtml_branch_coverage=1 00:12:01.832 --rc genhtml_function_coverage=1 00:12:01.832 --rc genhtml_legend=1 00:12:01.832 --rc geninfo_all_blocks=1 00:12:01.833 --rc geninfo_unexecuted_blocks=1 00:12:01.833 00:12:01.833 ' 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:12:01.833 12:26:07 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 
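One detail worth flagging from the scripts/common.sh walk a few entries back: the lcov option spelling is chosen by comparing the `lcov --version` output against 1.15 and 2, with IFS=.-: splitting each version string and the fields compared numerically one by one. A minimal re-implementation of that comparison (written fresh for illustration and splitting on dots only, not copied from the script):

# Hedged sketch of the cmp_versions idea: numeric, field-by-field comparison.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal, so not less-than
}
version_lt 1.15 2 && echo "lcov older than 2: use the 1.x --rc option spelling"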
00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:01.833 12:26:07 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:01.833 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 
-- # CONFIG_RAID5F=n 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:01.834 12:26:07 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:01.834 #define SPDK_CONFIG_H 00:12:01.834 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:01.834 #define SPDK_CONFIG_APPS 1 00:12:01.834 #define SPDK_CONFIG_ARCH native 00:12:01.834 #undef SPDK_CONFIG_ASAN 00:12:01.834 #undef SPDK_CONFIG_AVAHI 00:12:01.834 #undef SPDK_CONFIG_CET 00:12:01.834 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:01.834 #define SPDK_CONFIG_COVERAGE 1 00:12:01.834 #define SPDK_CONFIG_CROSS_PREFIX 00:12:01.834 #undef SPDK_CONFIG_CRYPTO 00:12:01.834 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:01.834 #undef SPDK_CONFIG_CUSTOMOCF 00:12:01.834 #undef SPDK_CONFIG_DAOS 00:12:01.834 #define SPDK_CONFIG_DAOS_DIR 00:12:01.834 #define SPDK_CONFIG_DEBUG 1 00:12:01.834 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:01.834 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:12:01.834 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:01.834 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:01.834 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:01.834 #undef SPDK_CONFIG_DPDK_UADK 00:12:01.834 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:12:01.834 #define SPDK_CONFIG_EXAMPLES 1 00:12:01.834 #undef SPDK_CONFIG_FC 00:12:01.834 #define SPDK_CONFIG_FC_PATH 00:12:01.834 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:01.834 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:01.834 #define SPDK_CONFIG_FSDEV 1 00:12:01.834 #undef SPDK_CONFIG_FUSE 00:12:01.834 #undef SPDK_CONFIG_FUZZER 00:12:01.834 #define SPDK_CONFIG_FUZZER_LIB 00:12:01.834 #undef SPDK_CONFIG_GOLANG 00:12:01.834 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:01.834 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:01.834 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:01.834 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:01.834 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:01.834 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:01.834 #undef SPDK_CONFIG_HAVE_LZ4 00:12:01.834 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:01.834 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:01.834 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:01.834 #define SPDK_CONFIG_IDXD 1 00:12:01.834 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:01.834 #undef SPDK_CONFIG_IPSEC_MB 00:12:01.834 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:01.834 #define SPDK_CONFIG_ISAL 1 00:12:01.834 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:01.834 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:01.834 #define SPDK_CONFIG_LIBDIR 00:12:01.834 #undef SPDK_CONFIG_LTO 00:12:01.834 #define SPDK_CONFIG_MAX_LCORES 128 00:12:01.834 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:01.834 #define SPDK_CONFIG_NVME_CUSE 1 00:12:01.834 #undef SPDK_CONFIG_OCF 00:12:01.834 #define SPDK_CONFIG_OCF_PATH 00:12:01.834 #define SPDK_CONFIG_OPENSSL_PATH 00:12:01.834 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:01.834 #define SPDK_CONFIG_PGO_DIR 00:12:01.834 #undef SPDK_CONFIG_PGO_USE 00:12:01.834 #define SPDK_CONFIG_PREFIX /usr/local 00:12:01.834 #undef SPDK_CONFIG_RAID5F 00:12:01.834 #undef SPDK_CONFIG_RBD 00:12:01.834 #define SPDK_CONFIG_RDMA 1 00:12:01.834 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:01.834 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:01.834 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:01.834 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:01.834 #define SPDK_CONFIG_SHARED 1 00:12:01.834 #undef SPDK_CONFIG_SMA 00:12:01.834 
#define SPDK_CONFIG_TESTS 1 00:12:01.834 #undef SPDK_CONFIG_TSAN 00:12:01.834 #define SPDK_CONFIG_UBLK 1 00:12:01.834 #define SPDK_CONFIG_UBSAN 1 00:12:01.834 #undef SPDK_CONFIG_UNIT_TESTS 00:12:01.834 #undef SPDK_CONFIG_URING 00:12:01.834 #define SPDK_CONFIG_URING_PATH 00:12:01.834 #undef SPDK_CONFIG_URING_ZNS 00:12:01.834 #undef SPDK_CONFIG_USDT 00:12:01.834 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:01.834 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:01.834 #undef SPDK_CONFIG_VFIO_USER 00:12:01.834 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:01.834 #define SPDK_CONFIG_VHOST 1 00:12:01.834 #define SPDK_CONFIG_VIRTIO 1 00:12:01.834 #undef SPDK_CONFIG_VTUNE 00:12:01.834 #define SPDK_CONFIG_VTUNE_DIR 00:12:01.834 #define SPDK_CONFIG_WERROR 1 00:12:01.834 #define SPDK_CONFIG_WPDK_DIR 00:12:01.834 #undef SPDK_CONFIG_XNVME 00:12:01.834 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:01.834 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:01.835 12:26:07 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
export SPDK_TEST_ISCSI 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:01.835 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
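
The repeated ": 0" / "export SPDK_TEST_*" pairs in this stretch of the trace are the shell default-assignment idiom: the colon is a no-op command whose expansion assigns a default only when the variable is unset or empty. A two-line sketch (the flag name is just one example from the trace):

: "${SPDK_TEST_NVME:=0}"   # no-op; assigns 0 only if unset or empty
export SPDK_TEST_NVME      # then publish the flag to child processes

This is why flags set in autorun-spdk.conf (e.g. SPDK_TEST_NVMF=1) survive, while everything else defaults to 0.
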
export SPDK_TEST_VHOST 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_VMD 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export 
SPDK_TEST_ACCEL_IAA 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:01.836 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # 
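
A sketch of the LeakSanitizer suppression setup traced just above: a known leak report originating in libfuse3 is written to a suppressions file, and LSAN_OPTIONS points the sanitizer runtime at it. Paths are taken from the trace:

supp=/var/tmp/asan_suppression_file
rm -f "$supp"
# suppress the known leak reported from libfuse3
echo 'leak:libfuse3.so' >> "$supp"
export LSAN_OPTIONS="suppressions=$supp"
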
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j16 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:01.837 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=rdma 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2736676 ]] 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2736676 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
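
The "kill -0 2736676" step above sends no signal at all; it only checks that the caller's process still exists and is signalable before per-test storage is set up. A sketch of the guard (PID taken from the trace):

pid=2736676
if [[ -n $pid ]] && kill -0 "$pid" 2>/dev/null; then
    echo "caller $pid is still alive; safe to set up test storage"
fi
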
common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.FOMfW8 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.FOMfW8/tests/target /tmp/spdk.FOMfW8 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
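
The fallback-directory pattern traced above: mktemp -u generates an unused name without creating it (-d requests a directory-style name, -t places it under $TMPDIR), and candidate directories are then tried in preference order. A sketch using the paths from the trace:

testdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
storage_fallback=$(mktemp -udt spdk.XXXXXX)   # e.g. /tmp/spdk.FOMfW8
storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
mkdir -p "$testdir" "$storage_fallback/tests/${testdir##*/}"
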
common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=46657187840 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=53583609856 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6926422016 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/sda3 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=xfs 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=139803742208 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=232985423872 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=93181681664 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=26731200512 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=26791804928 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=60604416 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=10694610944 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=10716725248 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22114304 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:01.838 12:26:07 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=26791321600 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=26791804928 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=483328 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=5358346240 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5358358528 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:01.838 * Looking for test storage... 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:01.838 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=46657187840 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9141014528 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
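
A sketch of the df parsing traced above: each mount's filesystem type, size, used and available space is recorded per mount point (df -T reports 1K blocks; the byte values in the trace imply a *1024 conversion), then the mount backing the candidate directory is looked up and checked for enough free space:

declare -A fss sizes avails uses
while read -r source fs size use avail _ mount; do
    fss["$mount"]=$fs
    sizes["$mount"]=$((size * 1024))
    avails["$mount"]=$((avail * 1024))
    uses["$mount"]=$((use * 1024))
done < <(df -T | grep -v Filesystem)
# find the mount point backing a target directory (field 6 of df output)
mount=$(df /tmp | awk '$1 !~ /Filesystem/ {print $6}')
echo "free on $mount: ${avails[$mount]} bytes (${fss[$mount]})"
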
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:01.839 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
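
The errtrace/extdebug/PS4 records a few lines above are what produce this log's per-line prefixes. A sketch of that setup, with print_backtrace replaced here by a plain echo for self-containment: errtrace propagates the ERR trap into functions, and PS4 stamps every xtrace line with a wall-clock time (the \t escape) plus source file and line number, which is exactly the " \t ... -- file@line -- $ " format seen throughout this log:

set -o errtrace            # ERR trap fires inside functions and subshells too
shopt -s extdebug          # richer BASH_SOURCE/BASH_LINENO for backtraces
trap 'trap - ERR; echo "error at ${BASH_SOURCE[0]}@${LINENO}" >&2' ERR
PS4=' \t -- ${BASH_SOURCE##*/}@${LINENO} -- \$ '
set -x
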
ver1_l : ver2_l) )) 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:01.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.839 --rc genhtml_branch_coverage=1 00:12:01.839 --rc genhtml_function_coverage=1 00:12:01.839 --rc genhtml_legend=1 00:12:01.839 --rc geninfo_all_blocks=1 00:12:01.839 --rc geninfo_unexecuted_blocks=1 00:12:01.839 00:12:01.839 ' 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:01.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.839 --rc genhtml_branch_coverage=1 00:12:01.839 --rc genhtml_function_coverage=1 00:12:01.839 --rc genhtml_legend=1 00:12:01.839 --rc geninfo_all_blocks=1 00:12:01.839 --rc geninfo_unexecuted_blocks=1 00:12:01.839 00:12:01.839 ' 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:01.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.839 --rc genhtml_branch_coverage=1 00:12:01.839 --rc genhtml_function_coverage=1 00:12:01.839 --rc genhtml_legend=1 00:12:01.839 --rc geninfo_all_blocks=1 00:12:01.839 --rc geninfo_unexecuted_blocks=1 00:12:01.839 00:12:01.839 ' 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:01.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.839 --rc genhtml_branch_coverage=1 00:12:01.839 --rc genhtml_function_coverage=1 00:12:01.839 --rc genhtml_legend=1 00:12:01.839 --rc geninfo_all_blocks=1 00:12:01.839 --rc geninfo_unexecuted_blocks=1 00:12:01.839 00:12:01.839 ' 00:12:01.839 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.840 12:26:07 
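
The cmp_versions trace above ("lt 1.15 2", used to decide whether the installed lcov is older than 2 and needs the extra branch/function coverage flags) splits both version strings on ".", "-" or ":" and compares component-wise, padding the shorter with zeros. A self-contained sketch of the same logic:

lt() {  # usage: lt 1.15 2  -> true if $1 < $2
    local -a ver1 ver2
    local IFS=.-: v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # versions are equal
}
lt 1.15 2 && echo "lcov older than 2: enable branch/function coverage options"
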
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:01.840 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:01.840 12:26:07 
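
The "[: : integer expression expected" line interleaved above is a real (non-fatal) script bug: test's -eq needs integer operands, and an unset variable expands to the empty string, so the check becomes [ '' -eq 1 ]. A defensive rewrite (the variable name here is illustrative, not the one in nvmf/common.sh):

flag=""
if [ "${flag:-0}" -eq 1 ]; then    # default empty to 0 before -eq
    echo "flag is set"
fi
# or use arithmetic evaluation, which treats unset/empty as 0:
if (( ${flag:-0} == 1 )); then echo "flag is set"; fi
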
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:01.840 12:26:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.437 12:26:09 
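
A hedged sketch of how a vendor:device to PCI-address cache like the pci_bus_cache lookups above can be built from standard Linux sysfs (0x15b3:0x1015 is the Mellanox ConnectX-4 Lx pair matched later in the trace; the real script's cache construction may differ):

declare -A pci_bus_cache
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor")   # e.g. 0x15b3 (Mellanox)
    device=$(<"$dev/device")   # e.g. 0x1015 (ConnectX-4 Lx)
    # append this address to the space-separated list for its ID pair
    pci_bus_cache["$vendor:$device"]+="${pci_bus_cache["$vendor:$device"]:+ }${dev##*/}"
done
mlx=(${pci_bus_cache["0x15b3:0x1015"]})
printf 'Found %s (0x15b3 - 0x1015)\n' "${mlx[@]}"
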
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:12:04.437 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 
(0x15b3 - 0x1015)' 00:12:04.437 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:12:04.437 Found net devices under 0000:83:00.0: mlx_0_0 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:04.437 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:12:04.438 Found net devices under 0000:83:00.1: mlx_0_1 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init 
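[editor's note] The two "Found net devices under ..." records above come from the PCI scan in nvmf/common.sh: each matched Mellanox function (vendor 0x15b3, device 0x1015, a ConnectX-4 Lx family part per Mellanox's PCI ID listing) is mapped to its kernel interface through sysfs. A minimal sketch of that loop, using only the paths and addresses visible in the trace; everything else is illustrative:

  net_devs=()
  for pci in 0000:83:00.0 0000:83:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob resolves to .../net/mlx_0_0 etc.
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
  done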
00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:04.438 12:26:09 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:04.438 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:04.438 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:12:04.438 altname enp131s0f0np0 00:12:04.438 inet 192.168.100.8/24 scope global mlx_0_0 00:12:04.438 valid_lft forever preferred_lft forever 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:04.438 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:04.438 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:12:04.438 altname enp131s0f1np1 00:12:04.438 inet 192.168.100.9/24 scope global mlx_0_1 00:12:04.438 valid_lft forever preferred_lft forever 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:04.438 12:26:09 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:04.438 192.168.100.9' 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 
00:12:04.438 192.168.100.9' 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:04.438 192.168.100.9' 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:04.438 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:04.439 ************************************ 00:12:04.439 START TEST nvmf_filesystem_no_in_capsule 00:12:04.439 ************************************ 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2737976 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2737976 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2737976 ']' 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 
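[editor's note] get_available_rdma_ips returned both addresses as one newline-separated list, and the head/tail pipeline above split it into the first and second target IPs. Condensed from the traced commands, with the values the log reports:

  RDMA_IP_LIST='192.168.100.8
  192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
  # Each address was read off its interface the way get_ip_address does it:
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1             # -> 192.168.100.8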
00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.439 12:26:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.439 [2024-11-20 12:26:10.010382] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:12:04.439 [2024-11-20 12:26:10.010503] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.439 [2024-11-20 12:26:10.086433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:04.439 [2024-11-20 12:26:10.150880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.439 [2024-11-20 12:26:10.150940] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.439 [2024-11-20 12:26:10.150956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.439 [2024-11-20 12:26:10.150969] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.439 [2024-11-20 12:26:10.150980] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
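[editor's note] nvmfappstart launched build/bin/nvmf_tgt with '-i 0 -e 0xFFFF -m 0xF' (pid 2737976) and waitforlisten blocked until the target's RPC socket came up. The real helper lives in common/autotest_common.sh and may differ; a hypothetical minimal equivalent of the wait, assuming the /var/tmp/spdk.sock path named in the log:

  nvmfpid=2737976   # pid reported by the log
  max_retries=100   # matches 'local max_retries=100' in the trace
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
    [[ -S /var/tmp/spdk.sock ]] && break   # socket exists: target is listening
    sleep 0.5
  done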
00:12:04.439 [2024-11-20 12:26:10.152327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.439 [2024-11-20 12:26:10.152406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.439 [2024-11-20 12:26:10.152431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.439 [2024-11-20 12:26:10.152434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.698 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.698 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:04.698 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:04.698 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:04.698 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.698 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.698 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:04.698 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:12:04.698 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.698 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.698 [2024-11-20 12:26:10.366955] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:12:04.698 [2024-11-20 12:26:10.396438] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xac6df0/0xacb2e0) succeed. 00:12:04.698 [2024-11-20 12:26:10.412641] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xac8480/0xb0c980) succeed. 
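[editor's note] rpc_cmd in the SPDK test tree is a thin wrapper around scripts/rpc.py, so the transport-creation step traced above expands to roughly the call below. The '-c 0' request (no in-capsule data for this test variant) is also why rdma.c warns that the size was raised to the 256-byte minimum required for msdbd=16:

  # -t transport type, -u I/O unit size, -c in-capsule data size
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
      -t rdma --num-shared-buffers 1024 -u 8192 -c 0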
00:12:04.956 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.956 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:04.956 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.956 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.956 Malloc1 00:12:04.956 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.956 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:04.956 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.956 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.956 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.956 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:04.956 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.956 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.956 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.957 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:04.957 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.957 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.957 [2024-11-20 12:26:10.706484] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:04.957 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.957 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:04.957 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:04.957 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:04.957 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:04.957 12:26:10 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:04.957 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:04.957 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.957 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.215 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.215 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:05.215 { 00:12:05.215 "name": "Malloc1", 00:12:05.215 "aliases": [ 00:12:05.215 "4b7c280c-a76c-4051-8d04-9111ce659069" 00:12:05.215 ], 00:12:05.215 "product_name": "Malloc disk", 00:12:05.215 "block_size": 512, 00:12:05.215 "num_blocks": 1048576, 00:12:05.215 "uuid": "4b7c280c-a76c-4051-8d04-9111ce659069", 00:12:05.215 "assigned_rate_limits": { 00:12:05.215 "rw_ios_per_sec": 0, 00:12:05.215 "rw_mbytes_per_sec": 0, 00:12:05.215 "r_mbytes_per_sec": 0, 00:12:05.215 "w_mbytes_per_sec": 0 00:12:05.215 }, 00:12:05.215 "claimed": true, 00:12:05.215 "claim_type": "exclusive_write", 00:12:05.215 "zoned": false, 00:12:05.215 "supported_io_types": { 00:12:05.215 "read": true, 00:12:05.215 "write": true, 00:12:05.215 "unmap": true, 00:12:05.215 "flush": true, 00:12:05.215 "reset": true, 00:12:05.215 "nvme_admin": false, 00:12:05.215 "nvme_io": false, 00:12:05.215 "nvme_io_md": false, 00:12:05.215 "write_zeroes": true, 00:12:05.215 "zcopy": true, 00:12:05.215 "get_zone_info": false, 00:12:05.215 "zone_management": false, 00:12:05.216 "zone_append": false, 00:12:05.216 "compare": false, 00:12:05.216 "compare_and_write": false, 00:12:05.216 "abort": true, 00:12:05.216 "seek_hole": false, 00:12:05.216 "seek_data": false, 00:12:05.216 "copy": true, 00:12:05.216 "nvme_iov_md": false 00:12:05.216 }, 00:12:05.216 "memory_domains": [ 00:12:05.216 { 00:12:05.216 "dma_device_id": "system", 00:12:05.216 "dma_device_type": 1 00:12:05.216 }, 00:12:05.216 { 00:12:05.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.216 "dma_device_type": 2 00:12:05.216 } 00:12:05.216 ], 00:12:05.216 "driver_specific": {} 00:12:05.216 } 00:12:05.216 ]' 00:12:05.216 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:05.216 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:05.216 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:05.216 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:05.216 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:05.216 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:05.216 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:12:05.216 12:26:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:06.151 12:26:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:06.151 12:26:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:06.151 12:26:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:06.151 12:26:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:06.151 12:26:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:08.679 12:26:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:08.679 12:26:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:08.679 12:26:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:08.679 12:26:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:08.679 12:26:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:08.679 12:26:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:08.679 12:26:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:08.679 12:26:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:08.679 12:26:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:08.679 12:26:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:08.679 12:26:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:08.679 12:26:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:08.679 12:26:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:08.679 12:26:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:08.679 12:26:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:08.679 12:26:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:12:08.679 12:26:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:08.679 12:26:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:08.679 12:26:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:09.614 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:09.614 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:09.614 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:09.614 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.614 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.614 ************************************ 00:12:09.614 START TEST filesystem_ext4 00:12:09.614 ************************************ 00:12:09.614 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:09.614 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:09.614 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:09.614 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:09.614 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:09.614 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:09.614 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:09.614 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:09.614 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:09.614 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:09.614 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:09.614 mke2fs 1.47.0 (5-Feb-2023) 00:12:09.614 Discarding device blocks: 0/522240 done 00:12:09.614 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:09.614 Filesystem UUID: c31ed933-1d63-494b-831a-5c38fe0a5319 00:12:09.614 Superblock backups stored on 
blocks: 00:12:09.614 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:09.614 00:12:09.614 Allocating group tables: 0/64 done 00:12:09.614 Writing inode tables: 0/64 done 00:12:09.614 Creating journal (8192 blocks): done 00:12:09.614 Writing superblocks and filesystem accounting information: 0/64 done 00:12:09.614 00:12:09.614 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:09.614 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:09.873 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:09.873 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:09.873 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:09.873 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:09.873 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2737976 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:09.874 00:12:09.874 real 0m0.190s 00:12:09.874 user 0m0.021s 00:12:09.874 sys 0m0.055s 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:09.874 ************************************ 00:12:09.874 END TEST filesystem_ext4 00:12:09.874 ************************************ 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:12:09.874 ************************************ 00:12:09.874 START TEST filesystem_btrfs 00:12:09.874 ************************************ 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:09.874 btrfs-progs v6.8.1 00:12:09.874 See https://btrfs.readthedocs.io for more information. 00:12:09.874 00:12:09.874 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:09.874 NOTE: several default settings have changed in version 5.15, please make sure 00:12:09.874 this does not affect your deployments: 00:12:09.874 - DUP for metadata (-m dup) 00:12:09.874 - enabled no-holes (-O no-holes) 00:12:09.874 - enabled free-space-tree (-R free-space-tree) 00:12:09.874 00:12:09.874 Label: (null) 00:12:09.874 UUID: cfb26ab7-23aa-45bd-b1b7-7af8a3aa79a5 00:12:09.874 Node size: 16384 00:12:09.874 Sector size: 4096 (CPU page size: 4096) 00:12:09.874 Filesystem size: 510.00MiB 00:12:09.874 Block group profiles: 00:12:09.874 Data: single 8.00MiB 00:12:09.874 Metadata: DUP 32.00MiB 00:12:09.874 System: DUP 8.00MiB 00:12:09.874 SSD detected: yes 00:12:09.874 Zoned device: no 00:12:09.874 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:09.874 Checksum: crc32c 00:12:09.874 Number of devices: 1 00:12:09.874 Devices: 00:12:09.874 ID SIZE PATH 00:12:09.874 1 510.00MiB /dev/nvme0n1p1 00:12:09.874 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:09.874 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:10.133 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:10.133 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:10.133 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:10.133 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:10.133 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:10.133 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:10.133 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2737976 00:12:10.133 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:10.133 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:10.133 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:10.133 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:10.133 00:12:10.133 real 0m0.220s 00:12:10.134 user 0m0.022s 00:12:10.134 sys 0m0.093s 00:12:10.134 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.134 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:10.134 ************************************ 00:12:10.134 END TEST filesystem_btrfs 
00:12:10.134 ************************************ 00:12:10.134 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:10.134 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:10.134 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.134 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:10.134 ************************************ 00:12:10.134 START TEST filesystem_xfs 00:12:10.134 ************************************ 00:12:10.134 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:10.134 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:10.134 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:10.134 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:10.134 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:10.134 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:10.134 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:10.134 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:10.134 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:10.134 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:10.134 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:10.134 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:10.134 = sectsz=512 attr=2, projid32bit=1 00:12:10.134 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:10.134 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:10.134 data = bsize=4096 blocks=130560, imaxpct=25 00:12:10.134 = sunit=0 swidth=0 blks 00:12:10.134 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:10.134 log =internal log bsize=4096 blocks=16384, version=2 00:12:10.134 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:10.134 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:10.134 Discarding blocks...Done. 
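[editor's note] All three subtests (ext4, btrfs, and the xfs run above) exercise the same routine from target/filesystem.sh once mkfs finishes: mount the new partition, perform a write, and verify that the target and its exported namespace survived. Gathered into one place from the traced lines:

  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                        # nvmf_tgt (pid 2737976) must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible to the initiator
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition table intact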
00:12:10.134 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:10.134 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:10.134 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:10.393 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:10.393 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:10.393 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:10.393 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:10.393 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:10.393 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2737976 00:12:10.393 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:10.393 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:10.393 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:10.393 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:10.393 00:12:10.393 real 0m0.207s 00:12:10.393 user 0m0.024s 00:12:10.393 sys 0m0.048s 00:12:10.393 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.393 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:10.393 ************************************ 00:12:10.393 END TEST filesystem_xfs 00:12:10.393 ************************************ 00:12:10.393 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:10.393 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:10.393 12:26:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:11.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.328 12:26:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:11.328 12:26:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:11.328 12:26:16 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:11.328 12:26:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.328 12:26:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:11.328 12:26:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.328 12:26:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:11.328 12:26:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:11.328 12:26:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.328 12:26:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.328 12:26:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.328 12:26:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:11.328 12:26:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2737976 00:12:11.329 12:26:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2737976 ']' 00:12:11.329 12:26:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2737976 00:12:11.329 12:26:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:11.329 12:26:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:11.329 12:26:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2737976 00:12:11.329 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:11.329 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:11.329 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2737976' 00:12:11.329 killing process with pid 2737976 00:12:11.329 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2737976 00:12:11.329 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2737976 00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:11.897 00:12:11.897 real 0m7.464s 00:12:11.897 user 0m28.863s 00:12:11.897 sys 0m0.946s 00:12:11.897 12:26:17 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.897 ************************************ 00:12:11.897 END TEST nvmf_filesystem_no_in_capsule 00:12:11.897 ************************************ 00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:11.897 ************************************ 00:12:11.897 START TEST nvmf_filesystem_in_capsule 00:12:11.897 ************************************ 00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2738855 00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2738855 00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2738855 ']' 00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
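[editor's note] Teardown for the no_in_capsule variant mirrored setup in reverse: disconnect the initiator, delete the subsystem over RPC, then killprocess stopped nvmf_tgt. A reconstruction from the trace, with rpc_cmd again spelled out as its rpc.py expansion (the ps/comm check simply refuses to kill a process named sudo):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  pid=2737976
  process_name=$(ps --no-headers -o comm= "$pid")
  [[ $process_name != sudo ]] && echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"   # returns once the target has exited

The in_capsule variant that starts next repeats the same flow, differing only in the transport creation, which the log below shows as 'nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096'.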
00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.897 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.897 [2024-11-20 12:26:17.510821] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:12:11.897 [2024-11-20 12:26:17.510924] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.897 [2024-11-20 12:26:17.585578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.897 [2024-11-20 12:26:17.649965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.897 [2024-11-20 12:26:17.650025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.897 [2024-11-20 12:26:17.650040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.897 [2024-11-20 12:26:17.650053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.897 [2024-11-20 12:26:17.650072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.898 [2024-11-20 12:26:17.654540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.898 [2024-11-20 12:26:17.654635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.898 [2024-11-20 12:26:17.654719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.898 [2024-11-20 12:26:17.654751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.156 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.156 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:12.156 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:12.157 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:12.157 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.157 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.157 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:12.157 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:12:12.157 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.157 12:26:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.157 [2024-11-20 12:26:17.856710] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1217df0/0x121c2e0) 
succeed. 00:12:12.157 [2024-11-20 12:26:17.872053] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1219480/0x125d980) succeed. 00:12:12.415 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.415 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:12.415 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.415 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.674 Malloc1 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.674 [2024-11-20 12:26:18.209845] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 
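The five rpc_cmd calls just traced provision the in-capsule variant end to end: an RDMA transport capped at 4096 bytes of in-capsule data, a 512 MiB malloc bdev, a subsystem, its namespace, and an RDMA listener. Condensed into plain RPC invocations (assuming rpc_cmd resolves to scripts/rpc.py against /var/tmp/spdk.sock, as in SPDK's autotest_common.sh):

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1    # 512 MiB total, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420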
00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:12.674 { 00:12:12.674 "name": "Malloc1", 00:12:12.674 "aliases": [ 00:12:12.674 "69ccb35a-fe76-49e1-b4cd-87ff31fb2c78" 00:12:12.674 ], 00:12:12.674 "product_name": "Malloc disk", 00:12:12.674 "block_size": 512, 00:12:12.674 "num_blocks": 1048576, 00:12:12.674 "uuid": "69ccb35a-fe76-49e1-b4cd-87ff31fb2c78", 00:12:12.674 "assigned_rate_limits": { 00:12:12.674 "rw_ios_per_sec": 0, 00:12:12.674 "rw_mbytes_per_sec": 0, 00:12:12.674 "r_mbytes_per_sec": 0, 00:12:12.674 "w_mbytes_per_sec": 0 00:12:12.674 }, 00:12:12.674 "claimed": true, 00:12:12.674 "claim_type": "exclusive_write", 00:12:12.674 "zoned": false, 00:12:12.674 "supported_io_types": { 00:12:12.674 "read": true, 00:12:12.674 "write": true, 00:12:12.674 "unmap": true, 00:12:12.674 "flush": true, 00:12:12.674 "reset": true, 00:12:12.674 "nvme_admin": false, 00:12:12.674 "nvme_io": false, 00:12:12.674 "nvme_io_md": false, 00:12:12.674 "write_zeroes": true, 00:12:12.674 "zcopy": true, 00:12:12.674 "get_zone_info": false, 00:12:12.674 "zone_management": false, 00:12:12.674 "zone_append": false, 00:12:12.674 "compare": false, 00:12:12.674 "compare_and_write": false, 00:12:12.674 "abort": true, 00:12:12.674 "seek_hole": false, 00:12:12.674 "seek_data": false, 00:12:12.674 "copy": true, 00:12:12.674 "nvme_iov_md": false 00:12:12.674 }, 00:12:12.674 "memory_domains": [ 00:12:12.674 { 00:12:12.674 "dma_device_id": "system", 00:12:12.674 "dma_device_type": 1 00:12:12.674 }, 00:12:12.674 { 00:12:12.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.674 "dma_device_type": 2 00:12:12.674 } 00:12:12.674 ], 00:12:12.674 "driver_specific": {} 00:12:12.674 } 00:12:12.674 ]' 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 
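get_bdev_size recomputes the bdev's size from the bdev_get_bdevs JSON rather than trusting the creation arguments; here 512-byte blocks times 1048576 blocks gives the bdev_size=512 (MiB) and malloc_size=536870912 (bytes) values echoed above. The same arithmetic as a standalone sketch (jq filters copied from the trace; the MiB round-trip mirrors the two values in the log):

    bdev_info=$(scripts/rpc.py bdev_get_bdevs -b Malloc1)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")       # 512
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")       # 1048576
    bdev_size=$(( bs * nb / 1024 / 1024 ))            # 512 MiB
    malloc_size=$(( bdev_size * 1024 * 1024 ))        # 536870912 bytes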
00:12:12.674 12:26:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:13.607 12:26:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:13.607 12:26:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:13.607 12:26:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.607 12:26:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:13.607 12:26:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:16.136 12:26:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:16.136 12:26:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:16.136 12:26:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:16.136 12:26:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:16.136 12:26:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:16.136 12:26:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:16.136 12:26:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:16.136 12:26:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:16.136 12:26:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:16.136 12:26:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:16.136 12:26:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:16.136 12:26:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:16.136 12:26:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:16.136 12:26:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:16.136 12:26:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:16.136 12:26:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:16.136 12:26:21 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:16.136 12:26:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:16.136 12:26:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:17.074 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:17.074 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:17.074 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:17.074 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.074 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.074 ************************************ 00:12:17.074 START TEST filesystem_in_capsule_ext4 00:12:17.074 ************************************ 00:12:17.074 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:17.074 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:17.074 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:17.074 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:17.074 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:17.074 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:17.075 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:17.075 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:17.075 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:17.075 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:17.075 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:17.075 mke2fs 1.47.0 (5-Feb-2023) 00:12:17.075 Discarding device blocks: 0/522240 done 00:12:17.075 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:17.075 Filesystem UUID: 792ee422-f615-47b4-b282-fff7fe75cf45 00:12:17.075 
Superblock backups stored on blocks: 00:12:17.075 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:17.075 00:12:17.075 Allocating group tables: 0/64 done 00:12:17.075 Writing inode tables: 0/64 done 00:12:17.075 Creating journal (8192 blocks): done 00:12:17.075 Writing superblocks and filesystem accounting information: 0/64 done 00:12:17.075 00:12:17.075 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:17.075 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2738855 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:17.349 00:12:17.349 real 0m0.184s 00:12:17.349 user 0m0.020s 00:12:17.349 sys 0m0.054s 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:17.349 ************************************ 00:12:17.349 END TEST filesystem_in_capsule_ext4 00:12:17.349 ************************************ 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:17.349 12:26:22 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.349 ************************************ 00:12:17.349 START TEST filesystem_in_capsule_btrfs 00:12:17.349 ************************************ 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:17.349 12:26:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:17.349 btrfs-progs v6.8.1 00:12:17.349 See https://btrfs.readthedocs.io for more information. 00:12:17.349 00:12:17.349 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:17.349 NOTE: several default settings have changed in version 5.15, please make sure 00:12:17.349 this does not affect your deployments: 00:12:17.349 - DUP for metadata (-m dup) 00:12:17.349 - enabled no-holes (-O no-holes) 00:12:17.349 - enabled free-space-tree (-R free-space-tree) 00:12:17.349 00:12:17.349 Label: (null) 00:12:17.349 UUID: 387b45bc-a684-4443-933c-89a9627d98c5 00:12:17.349 Node size: 16384 00:12:17.349 Sector size: 4096 (CPU page size: 4096) 00:12:17.349 Filesystem size: 510.00MiB 00:12:17.349 Block group profiles: 00:12:17.349 Data: single 8.00MiB 00:12:17.349 Metadata: DUP 32.00MiB 00:12:17.349 System: DUP 8.00MiB 00:12:17.349 SSD detected: yes 00:12:17.349 Zoned device: no 00:12:17.349 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:17.349 Checksum: crc32c 00:12:17.349 Number of devices: 1 00:12:17.349 Devices: 00:12:17.349 ID SIZE PATH 00:12:17.349 1 510.00MiB /dev/nvme0n1p1 00:12:17.349 00:12:17.349 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:17.349 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:17.607 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:17.607 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:17.607 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:17.607 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:17.607 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2738855 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:17.608 00:12:17.608 real 0m0.219s 00:12:17.608 user 0m0.022s 00:12:17.608 sys 0m0.092s 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:12:17.608 ************************************ 00:12:17.608 END TEST filesystem_in_capsule_btrfs 00:12:17.608 ************************************ 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.608 ************************************ 00:12:17.608 START TEST filesystem_in_capsule_xfs 00:12:17.608 ************************************ 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:17.608 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:17.608 = sectsz=512 attr=2, projid32bit=1 00:12:17.608 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:17.608 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:17.608 data = bsize=4096 blocks=130560, imaxpct=25 00:12:17.608 = sunit=0 swidth=0 blks 00:12:17.608 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:17.608 log =internal log bsize=4096 blocks=16384, version=2 00:12:17.608 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:17.608 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:17.608 Discarding blocks...Done. 
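With mkfs.xfs done, the run repeats the same smoke test already applied to ext4 and btrfs: mount the fresh filesystem, create and remove a file with syncs in between, unmount, and confirm the target process survived the I/O. Condensed from the filesystem.sh steps traced above and below (the script's umount retry counter is omitted):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 2738855    # fails if nvmf_tgt died during the filesystem I/O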
00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2738855 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:17.608 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:17.866 00:12:17.866 real 0m0.178s 00:12:17.866 user 0m0.014s 00:12:17.866 sys 0m0.056s 00:12:17.866 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.866 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:17.866 ************************************ 00:12:17.866 END TEST filesystem_in_capsule_xfs 00:12:17.866 ************************************ 00:12:17.866 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:17.866 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:17.866 12:26:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:18.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:18.798 12:26:24 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2738855 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2738855 ']' 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2738855 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2738855 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2738855' 00:12:18.798 killing process with pid 2738855 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2738855 00:12:18.798 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2738855 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:19.364 00:12:19.364 real 0m7.432s 
00:12:19.364 user 0m28.643s 00:12:19.364 sys 0m0.949s 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.364 ************************************ 00:12:19.364 END TEST nvmf_filesystem_in_capsule 00:12:19.364 ************************************ 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:19.364 rmmod nvme_rdma 00:12:19.364 rmmod nvme_fabrics 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:19.364 00:12:19.364 real 0m17.920s 00:12:19.364 user 0m58.600s 00:12:19.364 sys 0m3.926s 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:19.364 ************************************ 00:12:19.364 END TEST nvmf_filesystem 00:12:19.364 ************************************ 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:19.364 ************************************ 00:12:19.364 START TEST nvmf_target_discovery 00:12:19.364 ************************************ 00:12:19.364 12:26:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:12:19.364 * Looking for test storage... 
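Before the discovery suite gets underway, note the teardown the in-capsule run just performed: delete the subsystem, kill the target, then unload the kernel initiator modules (the rmmod lines above are modprobe -r's verbose output). Condensed, with the PID from this run:

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 2738855                  # killprocess then waits on the pid
    modprobe -v -r nvme-rdma      # verbose output: rmmod nvme_rdma, rmmod nvme_fabrics
    modprobe -v -r nvme-fabrics   # already gone by now; the set +e retry loop tolerates it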
00:12:19.364 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:19.364 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:19.364 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:19.364 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:19.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.624 --rc genhtml_branch_coverage=1 00:12:19.624 --rc genhtml_function_coverage=1 00:12:19.624 --rc genhtml_legend=1 00:12:19.624 --rc geninfo_all_blocks=1 00:12:19.624 --rc geninfo_unexecuted_blocks=1 00:12:19.624 00:12:19.624 ' 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:19.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.624 --rc genhtml_branch_coverage=1 00:12:19.624 --rc genhtml_function_coverage=1 00:12:19.624 --rc genhtml_legend=1 00:12:19.624 --rc geninfo_all_blocks=1 00:12:19.624 --rc geninfo_unexecuted_blocks=1 00:12:19.624 00:12:19.624 ' 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:19.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.624 --rc genhtml_branch_coverage=1 00:12:19.624 --rc genhtml_function_coverage=1 00:12:19.624 --rc genhtml_legend=1 00:12:19.624 --rc geninfo_all_blocks=1 00:12:19.624 --rc geninfo_unexecuted_blocks=1 00:12:19.624 00:12:19.624 ' 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:19.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.624 --rc genhtml_branch_coverage=1 00:12:19.624 --rc genhtml_function_coverage=1 00:12:19.624 --rc genhtml_legend=1 00:12:19.624 --rc geninfo_all_blocks=1 00:12:19.624 --rc geninfo_unexecuted_blocks=1 00:12:19.624 00:12:19.624 ' 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.624 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.625 12:26:25 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:19.625 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:19.625 12:26:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:22.159 12:26:27 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
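
gather_supported_nvmf_pci_devs builds per-family arrays (e810, x722, mlx) by looking up vendor:device pairs (Intel 0x8086, Mellanox 0x15b3) in its pci_bus_cache, then keeps only the mlx list because SPDK_TEST_NVMF_NICS=mlx5. A rough standalone equivalent of that lookup, scanning sysfs directly instead of the script's cache (sketch):

    # Sketch: find PCI functions matching one vendor:device pair via sysfs
    wanted_vendor=0x15b3    # Mellanox, per the log
    wanted_device=0x1015    # the device ID reported below
    for dev in /sys/bus/pci/devices/*; do
        [ "$(cat "$dev/vendor")" = "$wanted_vendor" ] || continue
        [ "$(cat "$dev/device")" = "$wanted_device" ] || continue
        echo "Found ${dev##*/} ($wanted_vendor - $wanted_device)"
    done
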
00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:12:22.159 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:12:22.159 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:12:22.159 Found net devices under 0000:83:00.0: mlx_0_0 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.159 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.160 12:26:27 
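
For each matching PCI function the script expands /sys/bus/pci/devices/$pci/net/* to discover which kernel netdevs sit on that function, which is what produces the "Found net devices under 0000:83:00.0: mlx_0_0" line above. A standalone sketch of the same mapping:

    # Sketch: list the netdevs bound to one PCI function
    pci=0000:83:00.0
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$netdir" ] || continue     # glob may be empty if no netdev is bound
        echo "Found net device under $pci: ${netdir##*/}"
    done
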
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:12:22.160 Found net devices under 0000:83:00.1: mlx_0_1 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 
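
rdma_device_init only runs on Linux (the uname check above) and loads the InfiniBand/RDMA stack one module at a time. The same set as a loop, with the module names taken verbatim from the log (requires root):

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
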
00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:22.160 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:22.160 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:12:22.160 altname enp131s0f0np0 00:12:22.160 inet 192.168.100.8/24 scope global mlx_0_0 00:12:22.160 valid_lft forever preferred_lft forever 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:22.160 12:26:27 
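
get_ip_address, traced above, is a three-stage pipeline: 'ip -o -4 addr show IFACE' prints one record per line, awk takes field 4 (ADDR/PREFIX), and cut drops the prefix length. Runnable form:

    interface=mlx_0_0
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    # prints 192.168.100.8 on this host, per the log above
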
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:22.160 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:22.160 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:12:22.160 altname enp131s0f1np1 00:12:22.160 inet 192.168.100.9/24 scope global mlx_0_1 00:12:22.160 valid_lft forever preferred_lft forever 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.160 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:22.161 12:26:27 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:22.161 192.168.100.9' 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:22.161 192.168.100.9' 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:22.161 192.168.100.9' 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
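
RDMA_IP_LIST arrives as one address per line, so the first and second target IPs are peeled off with head and tail exactly as traced above. Sketch:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
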
common/autotest_common.sh@726 -- # xtrace_disable 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2741012 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2741012 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2741012 ']' 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.161 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.161 [2024-11-20 12:26:27.688179] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:12:22.161 [2024-11-20 12:26:27.688289] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.161 [2024-11-20 12:26:27.763181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.161 [2024-11-20 12:26:27.827594] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.161 [2024-11-20 12:26:27.827656] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.161 [2024-11-20 12:26:27.827671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.161 [2024-11-20 12:26:27.827684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.161 [2024-11-20 12:26:27.827696] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
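
nvmfappstart launches the target with shared-memory id 0, the full 0xFFFF tracepoint mask, and a four-core mask, then waitforlisten blocks until the RPC socket answers. A simplified sketch of that sequence (the real waitforlisten also retries and verifies the pid is alive):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # crude stand-in for waitforlisten
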
00:12:22.161 [2024-11-20 12:26:27.829026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.161 [2024-11-20 12:26:27.829069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.161 [2024-11-20 12:26:27.829130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.161 [2024-11-20 12:26:27.829135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.420 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.420 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:22.420 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:22.420 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:22.420 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.420 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.420 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:22.420 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.420 12:26:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.420 [2024-11-20 12:26:28.029266] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf1fdf0/0xf242e0) succeed. 00:12:22.420 [2024-11-20 12:26:28.044428] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf21480/0xf65980) succeed. 
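
With both mlx5 IB devices registered, the test creates the RDMA transport. The rpc_cmd wrapper ultimately drives scripts/rpc.py against /var/tmp/spdk.sock, so the equivalent direct invocation is (sketch, flags copied from the log):

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
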
00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.679 Null1 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.679 [2024-11-20 12:26:28.251692] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.679 Null2 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:22.679 12:26:28 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.679 Null3 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.679 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.680 12:26:28 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.680 Null4 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
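
The seq 1 4 loop above provisions four identical stacks: a null bdev (size 102400, block size 512), a subsystem, a namespace, and an RDMA listener, followed by a discovery listener and a referral to port 4430. Collected into one runnable sketch, with every command string taken from the log:

    for i in 1 2 3 4; do
        scripts/rpc.py bdev_null_create Null$i 102400 512
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            -a -s SPDK0000000000000$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t rdma -a 192.168.100.8 -s 4420
    done
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430
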
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.680 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -a 192.168.100.8 -s 4420 00:12:22.939 00:12:22.939 Discovery Log Number of Records 6, Generation counter 6 00:12:22.939 =====Discovery Log Entry 0====== 00:12:22.939 trtype: rdma 00:12:22.939 adrfam: ipv4 00:12:22.939 subtype: current discovery subsystem 00:12:22.939 treq: not required 00:12:22.939 portid: 0 00:12:22.939 trsvcid: 4420 00:12:22.939 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:22.939 traddr: 192.168.100.8 00:12:22.939 eflags: explicit discovery connections, duplicate discovery information 00:12:22.939 rdma_prtype: not specified 00:12:22.939 rdma_qptype: connected 00:12:22.939 rdma_cms: rdma-cm 00:12:22.939 rdma_pkey: 0x0000 00:12:22.939 =====Discovery Log Entry 1====== 00:12:22.939 trtype: rdma 00:12:22.939 adrfam: ipv4 00:12:22.939 subtype: nvme subsystem 00:12:22.939 treq: not required 00:12:22.939 portid: 0 00:12:22.939 trsvcid: 4420 00:12:22.939 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:22.939 traddr: 192.168.100.8 00:12:22.939 eflags: none 00:12:22.939 rdma_prtype: not specified 00:12:22.939 rdma_qptype: connected 00:12:22.939 rdma_cms: rdma-cm 00:12:22.939 rdma_pkey: 0x0000 00:12:22.939 =====Discovery Log Entry 2====== 00:12:22.939 trtype: rdma 00:12:22.939 adrfam: ipv4 00:12:22.939 subtype: nvme subsystem 00:12:22.939 treq: not required 00:12:22.939 portid: 0 00:12:22.939 trsvcid: 4420 00:12:22.939 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:22.939 traddr: 192.168.100.8 00:12:22.939 eflags: none 00:12:22.939 rdma_prtype: not specified 00:12:22.939 rdma_qptype: connected 00:12:22.939 rdma_cms: rdma-cm 00:12:22.939 rdma_pkey: 0x0000 00:12:22.939 =====Discovery Log Entry 3====== 00:12:22.939 trtype: rdma 00:12:22.939 adrfam: ipv4 00:12:22.939 subtype: nvme subsystem 00:12:22.939 treq: not required 00:12:22.939 portid: 0 00:12:22.939 trsvcid: 4420 00:12:22.939 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:22.939 traddr: 192.168.100.8 00:12:22.939 eflags: none 00:12:22.939 rdma_prtype: not specified 00:12:22.939 rdma_qptype: connected 00:12:22.939 rdma_cms: rdma-cm 00:12:22.939 rdma_pkey: 0x0000 00:12:22.939 =====Discovery Log Entry 4====== 00:12:22.939 trtype: rdma 00:12:22.939 adrfam: ipv4 00:12:22.939 subtype: nvme subsystem 00:12:22.939 treq: not required 00:12:22.939 portid: 0 00:12:22.939 trsvcid: 4420 00:12:22.939 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:22.939 traddr: 192.168.100.8 00:12:22.939 eflags: none 00:12:22.939 rdma_prtype: not specified 00:12:22.939 rdma_qptype: connected 00:12:22.939 rdma_cms: rdma-cm 00:12:22.939 rdma_pkey: 0x0000 00:12:22.939 =====Discovery Log Entry 5====== 00:12:22.939 trtype: rdma 00:12:22.939 adrfam: ipv4 00:12:22.939 subtype: discovery subsystem referral 00:12:22.939 treq: not required 00:12:22.939 portid: 0 00:12:22.939 trsvcid: 4430 00:12:22.939 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:22.939 traddr: 192.168.100.8 00:12:22.939 eflags: none 00:12:22.939 rdma_prtype: unrecognized 00:12:22.939 rdma_qptype: unrecognized 00:12:22.939 rdma_cms: unrecognized 00:12:22.939 rdma_pkey: 0x0000 00:12:22.939 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:22.939 Perform nvmf subsystem discovery via RPC 00:12:22.939 12:26:28 
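
The discovery log above contains six records, matching what was just provisioned: entry 0 is the current discovery subsystem itself on trsvcid 4420, entries 1 through 4 are cnode1..cnode4, and entry 5 is the referral on trsvcid 4430 (its rdma_* fields read "unrecognized", consistent with the referral having been added without RDMA-specific parameters). The discover command in runnable form, with the host NQN and ID exactly as logged:

    nvme discover \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae \
        --hostid=f19ece52-b769-e111-bd1d-001e673d80ae \
        -t rdma -a 192.168.100.8 -s 4420
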
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:22.939 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.939 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.939 [ 00:12:22.939 { 00:12:22.939 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:22.939 "subtype": "Discovery", 00:12:22.939 "listen_addresses": [ 00:12:22.939 { 00:12:22.939 "trtype": "RDMA", 00:12:22.939 "adrfam": "IPv4", 00:12:22.939 "traddr": "192.168.100.8", 00:12:22.939 "trsvcid": "4420" 00:12:22.939 } 00:12:22.939 ], 00:12:22.939 "allow_any_host": true, 00:12:22.939 "hosts": [] 00:12:22.939 }, 00:12:22.939 { 00:12:22.939 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:22.939 "subtype": "NVMe", 00:12:22.939 "listen_addresses": [ 00:12:22.939 { 00:12:22.939 "trtype": "RDMA", 00:12:22.939 "adrfam": "IPv4", 00:12:22.939 "traddr": "192.168.100.8", 00:12:22.939 "trsvcid": "4420" 00:12:22.939 } 00:12:22.939 ], 00:12:22.939 "allow_any_host": true, 00:12:22.939 "hosts": [], 00:12:22.939 "serial_number": "SPDK00000000000001", 00:12:22.939 "model_number": "SPDK bdev Controller", 00:12:22.939 "max_namespaces": 32, 00:12:22.939 "min_cntlid": 1, 00:12:22.939 "max_cntlid": 65519, 00:12:22.939 "namespaces": [ 00:12:22.939 { 00:12:22.940 "nsid": 1, 00:12:22.940 "bdev_name": "Null1", 00:12:22.940 "name": "Null1", 00:12:22.940 "nguid": "02CCCAC00A6C4F158250BBF2D87C3EE4", 00:12:22.940 "uuid": "02cccac0-0a6c-4f15-8250-bbf2d87c3ee4" 00:12:22.940 } 00:12:22.940 ] 00:12:22.940 }, 00:12:22.940 { 00:12:22.940 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:22.940 "subtype": "NVMe", 00:12:22.940 "listen_addresses": [ 00:12:22.940 { 00:12:22.940 "trtype": "RDMA", 00:12:22.940 "adrfam": "IPv4", 00:12:22.940 "traddr": "192.168.100.8", 00:12:22.940 "trsvcid": "4420" 00:12:22.940 } 00:12:22.940 ], 00:12:22.940 "allow_any_host": true, 00:12:22.940 "hosts": [], 00:12:22.940 "serial_number": "SPDK00000000000002", 00:12:22.940 "model_number": "SPDK bdev Controller", 00:12:22.940 "max_namespaces": 32, 00:12:22.940 "min_cntlid": 1, 00:12:22.940 "max_cntlid": 65519, 00:12:22.940 "namespaces": [ 00:12:22.940 { 00:12:22.940 "nsid": 1, 00:12:22.940 "bdev_name": "Null2", 00:12:22.940 "name": "Null2", 00:12:22.940 "nguid": "00B004ED728346819FB4727CCEFB1566", 00:12:22.940 "uuid": "00b004ed-7283-4681-9fb4-727ccefb1566" 00:12:22.940 } 00:12:22.940 ] 00:12:22.940 }, 00:12:22.940 { 00:12:22.940 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:22.940 "subtype": "NVMe", 00:12:22.940 "listen_addresses": [ 00:12:22.940 { 00:12:22.940 "trtype": "RDMA", 00:12:22.940 "adrfam": "IPv4", 00:12:22.940 "traddr": "192.168.100.8", 00:12:22.940 "trsvcid": "4420" 00:12:22.940 } 00:12:22.940 ], 00:12:22.940 "allow_any_host": true, 00:12:22.940 "hosts": [], 00:12:22.940 "serial_number": "SPDK00000000000003", 00:12:22.940 "model_number": "SPDK bdev Controller", 00:12:22.940 "max_namespaces": 32, 00:12:22.940 "min_cntlid": 1, 00:12:22.940 "max_cntlid": 65519, 00:12:22.940 "namespaces": [ 00:12:22.940 { 00:12:22.940 "nsid": 1, 00:12:22.940 "bdev_name": "Null3", 00:12:22.940 "name": "Null3", 00:12:22.940 "nguid": "D93FB8DDCA044D208F2E0D39A0DEF309", 00:12:22.940 "uuid": "d93fb8dd-ca04-4d20-8f2e-0d39a0def309" 00:12:22.940 } 00:12:22.940 ] 00:12:22.940 }, 00:12:22.940 { 00:12:22.940 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:22.940 "subtype": "NVMe", 00:12:22.940 "listen_addresses": [ 00:12:22.940 { 00:12:22.940 
"trtype": "RDMA", 00:12:22.940 "adrfam": "IPv4", 00:12:22.940 "traddr": "192.168.100.8", 00:12:22.940 "trsvcid": "4420" 00:12:22.940 } 00:12:22.940 ], 00:12:22.940 "allow_any_host": true, 00:12:22.940 "hosts": [], 00:12:22.940 "serial_number": "SPDK00000000000004", 00:12:22.940 "model_number": "SPDK bdev Controller", 00:12:22.940 "max_namespaces": 32, 00:12:22.940 "min_cntlid": 1, 00:12:22.940 "max_cntlid": 65519, 00:12:22.940 "namespaces": [ 00:12:22.940 { 00:12:22.940 "nsid": 1, 00:12:22.940 "bdev_name": "Null4", 00:12:22.940 "name": "Null4", 00:12:22.940 "nguid": "D8065FDD38F04152B7F42F3418ACC372", 00:12:22.940 "uuid": "d8065fdd-38f0-4152-b7f4-2f3418acc372" 00:12:22.940 } 00:12:22.940 ] 00:12:22.940 } 00:12:22.940 ] 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:22.940 
12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:22.940 12:26:28 
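
Teardown mirrors setup: delete each subsystem, delete its null bdev, drop the referral, then confirm via bdev_get_bdevs that no bdevs remain (check_bdevs ends up empty above). As direct calls (sketch; command names from the log):

    for i in 1 2 3 4; do
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        scripts/rpc.py bdev_null_delete Null$i
    done
    scripts/rpc.py nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430
    scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'   # empty output means everything was removed
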
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:22.940 rmmod nvme_rdma 00:12:22.940 rmmod nvme_fabrics 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2741012 ']' 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2741012 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2741012 ']' 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2741012 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2741012 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:22.940 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:22.941 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2741012' 00:12:22.941 killing process with pid 2741012 00:12:22.941 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2741012 00:12:22.941 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2741012 00:12:23.510 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:23.510 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:23.510 00:12:23.510 real 0m3.975s 00:12:23.510 user 0m5.365s 00:12:23.510 sys 0m2.224s 00:12:23.510 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.510 12:26:28 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.510 ************************************ 00:12:23.510 END TEST nvmf_target_discovery 
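
nvmftestfini unloads the kernel modules with up to 20 attempts under set +e (rmmod can fail transiently while references drain), then kills the target by pid. An approximate sketch of that pattern, not the exact common.sh body:

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
    kill "$nvmfpid"
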
00:12:23.510 ************************************ 00:12:23.510 12:26:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:12:23.510 12:26:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:23.510 12:26:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.510 12:26:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:23.510 ************************************ 00:12:23.510 START TEST nvmf_referrals 00:12:23.510 ************************************ 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:12:23.510 * Looking for test storage... 00:12:23.510 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:23.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.510 --rc genhtml_branch_coverage=1 00:12:23.510 --rc genhtml_function_coverage=1 00:12:23.510 --rc genhtml_legend=1 00:12:23.510 --rc geninfo_all_blocks=1 00:12:23.510 --rc geninfo_unexecuted_blocks=1 00:12:23.510 00:12:23.510 ' 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:23.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.510 --rc genhtml_branch_coverage=1 00:12:23.510 --rc genhtml_function_coverage=1 00:12:23.510 --rc genhtml_legend=1 00:12:23.510 --rc geninfo_all_blocks=1 00:12:23.510 --rc geninfo_unexecuted_blocks=1 00:12:23.510 00:12:23.510 ' 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:23.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.510 --rc genhtml_branch_coverage=1 00:12:23.510 --rc genhtml_function_coverage=1 00:12:23.510 --rc genhtml_legend=1 00:12:23.510 --rc geninfo_all_blocks=1 00:12:23.510 --rc geninfo_unexecuted_blocks=1 00:12:23.510 00:12:23.510 ' 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:23.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.510 --rc genhtml_branch_coverage=1 00:12:23.510 --rc genhtml_function_coverage=1 00:12:23.510 --rc genhtml_legend=1 00:12:23.510 --rc geninfo_all_blocks=1 00:12:23.510 --rc geninfo_unexecuted_blocks=1 00:12:23.510 00:12:23.510 ' 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@7 -- # uname -s 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.510 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:23.511 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:23.511 12:26:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:12:26.050 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:12:26.050 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:26.050 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:12:26.051 Found net devices under 0000:83:00.0: mlx_0_0 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:12:26.051 Found net devices under 0000:83:00.1: mlx_0_1 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # 
[[ rdma == tcp ]] 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:26.051 12:26:31 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:26.051 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:26.051 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:12:26.051 altname enp131s0f0np0 00:12:26.051 inet 192.168.100.8/24 scope global mlx_0_0 00:12:26.051 valid_lft forever preferred_lft forever 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:26.051 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:26.051 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:12:26.051 altname enp131s0f1np1 00:12:26.051 inet 192.168.100.9/24 scope global mlx_0_1 00:12:26.051 valid_lft forever preferred_lft forever 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # 
local net_dev rxe_net_dev rxe_net_devs 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:26.051 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:26.052 192.168.100.9' 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:26.052 192.168.100.9' 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:26.052 192.168.100.9' 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2742501 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2742501 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2742501 ']' 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.052 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.052 [2024-11-20 12:26:31.651649] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:12:26.052 [2024-11-20 12:26:31.651762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.052 [2024-11-20 12:26:31.724974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.052 [2024-11-20 12:26:31.787679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.052 [2024-11-20 12:26:31.787738] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.052 [2024-11-20 12:26:31.787754] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.052 [2024-11-20 12:26:31.787767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.052 [2024-11-20 12:26:31.787779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.052 [2024-11-20 12:26:31.789107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.052 [2024-11-20 12:26:31.789203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.052 [2024-11-20 12:26:31.789255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.052 [2024-11-20 12:26:31.789258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.311 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.311 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:26.311 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:26.311 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:26.311 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.311 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.311 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:26.311 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.311 12:26:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.311 [2024-11-20 12:26:31.988436] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf9edf0/0xfa32e0) succeed. 00:12:26.311 [2024-11-20 12:26:32.003867] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfa0480/0xfe4980) succeed. 
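At this point the target is fully up: all four reactors are running and both mlx5 ports have been registered as IB devices. Everything referrals.sh does from here on is driven through SPDK's JSON-RPC interface (the rpc_cmd calls in the trace below). A minimal standalone sketch of the same setup, assuming a running nvmf_tgt with the default /var/tmp/spdk.sock and SPDK's rpc.py at its usual scripts/ location (the RPC names and flags are the ones visible in the trace; the path is a placeholder):

    # Sketch only -- rpc.py location and default RPC socket are assumptions.
    rpc=/path/to/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
    done
    $rpc nvmf_discovery_get_referrals | jq length    # the test expects 3 here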
00:12:26.569 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.569 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:12:26.569 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.569 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.569 [2024-11-20 12:26:32.184488] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:12:26.569 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.569 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:12:26.569 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.569 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.569 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.569 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:12:26.569 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.569 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.569 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.569 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:26.570 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:26.829 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:27.087 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:12:27.087 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.087 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.087 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.087 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:27.087 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.087 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.087 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.087 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:27.087 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:27.087 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:27.087 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:27.087 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.088 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.088 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # sort 00:12:27.088 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.088 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:27.088 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:27.088 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:27.088 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:27.088 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:27.088 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:27.088 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:27.088 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:27.088 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:27.088 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:27.088 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:27.088 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:27.088 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:27.088 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:27.088 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:27.346 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:27.346 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:27.346 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:27.346 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:27.346 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:27.346 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:27.346 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ 
nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:27.346 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:27.346 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.346 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.346 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.346 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:27.346 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:27.346 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:27.346 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:27.346 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.346 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.346 12:26:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:27.346 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.346 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:27.346 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:27.346 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:27.346 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:27.346 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:27.346 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:27.346 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:27.346 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:27.604 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:27.604 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:27.604 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:27.604 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:27.604 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:27.604 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 
--hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:27.604 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:27.604 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:27.604 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:27.604 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:27.604 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:27.604 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:27.604 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:27.604 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:27.604 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:27.604 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.604 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.864 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.864 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:27.864 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:27.864 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.864 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.864 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.864 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:27.864 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:27.864 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:27.864 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:27.864 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:27.864 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:27.864 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
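The rpc-side and nvme-side helpers exercised above are the core of the whole test: get_referral_ips compares what nvmf_discovery_get_referrals reports against what a host actually sees in the discovery log page. The host side reduces to a single nvme-cli invocation plus a jq filter; a sketch under the same assumptions as the trace (nvme-cli with JSON output, discovery service listening on 192.168.100.8:8009; the HOSTNQN/HOSTID variables stand in for the generated values used above):

    # Sketch of the host-side check (get_referral_ips nvme in the trace).
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t rdma -a 192.168.100.8 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
      sort
    # After the removals above this prints nothing, matching [[ '' == '' ]].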
00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:27.865 rmmod nvme_rdma 00:12:27.865 rmmod nvme_fabrics 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2742501 ']' 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2742501 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2742501 ']' 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2742501 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2742501 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2742501' 00:12:27.865 killing process with pid 2742501 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2742501 00:12:27.865 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2742501 00:12:28.433 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:28.433 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:28.433 00:12:28.433 real 0m4.908s 00:12:28.433 user 0m9.939s 00:12:28.433 sys 0m2.541s 00:12:28.433 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.433 12:26:33 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.433 ************************************ 00:12:28.433 END TEST nvmf_referrals 00:12:28.433 ************************************ 00:12:28.433 12:26:33 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:12:28.433 12:26:33 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:28.433 12:26:33 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.433 12:26:33 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:28.433 ************************************ 00:12:28.433 START TEST nvmf_connect_disconnect 00:12:28.433 ************************************ 00:12:28.433 12:26:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:12:28.433 * Looking for test storage... 00:12:28.433 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:28.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.433 --rc genhtml_branch_coverage=1 00:12:28.433 --rc genhtml_function_coverage=1 00:12:28.433 --rc genhtml_legend=1 00:12:28.433 --rc geninfo_all_blocks=1 00:12:28.433 --rc geninfo_unexecuted_blocks=1 00:12:28.433 00:12:28.433 ' 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:28.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.433 --rc genhtml_branch_coverage=1 00:12:28.433 --rc genhtml_function_coverage=1 00:12:28.433 --rc genhtml_legend=1 00:12:28.433 --rc geninfo_all_blocks=1 00:12:28.433 --rc geninfo_unexecuted_blocks=1 00:12:28.433 00:12:28.433 ' 00:12:28.433 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:28.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.433 --rc genhtml_branch_coverage=1 00:12:28.433 --rc genhtml_function_coverage=1 00:12:28.433 --rc genhtml_legend=1 00:12:28.433 --rc geninfo_all_blocks=1 00:12:28.433 --rc geninfo_unexecuted_blocks=1 00:12:28.433 00:12:28.433 ' 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:28.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.434 --rc genhtml_branch_coverage=1 00:12:28.434 --rc genhtml_function_coverage=1 00:12:28.434 --rc genhtml_legend=1 00:12:28.434 --rc geninfo_all_blocks=1 00:12:28.434 --rc geninfo_unexecuted_blocks=1 00:12:28.434 00:12:28.434 ' 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.434 12:26:34 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:28.434 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:28.434 12:26:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 
00:12:30.975 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:12:30.975 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:12:30.975 Found net devices under 0000:83:00.0: mlx_0_0 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 
00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:12:30.975 Found net devices under 0000:83:00.1: mlx_0_1 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:30.975 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:30.976 12:26:36 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:30.976 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:30.976 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:12:30.976 altname enp131s0f0np0 00:12:30.976 inet 192.168.100.8/24 scope global mlx_0_0 00:12:30.976 valid_lft forever preferred_lft forever 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:30.976 12:26:36 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:30.976 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:30.976 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:12:30.976 altname enp131s0f1np1 00:12:30.976 inet 192.168.100.9/24 scope global mlx_0_1 00:12:30.976 valid_lft forever preferred_lft forever 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo 
mlx_0_1 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:30.976 192.168.100.9' 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:30.976 192.168.100.9' 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:30.976 192.168.100.9' 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2744103 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2744103 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2744103 ']' 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.976 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.977 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:30.977 [2024-11-20 12:26:36.623611] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:12:30.977 [2024-11-20 12:26:36.623699] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.977 [2024-11-20 12:26:36.694866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.235 [2024-11-20 12:26:36.757501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.235 [2024-11-20 12:26:36.757562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.235 [2024-11-20 12:26:36.757578] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.235 [2024-11-20 12:26:36.757592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.235 [2024-11-20 12:26:36.757603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
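The startup sequence logged here is the usual nvmfappstart pattern: launch nvmf_tgt with the core mask and tracepoint mask shown, then block until the RPC socket answers. A rough equivalent of what the waitforlisten helper does, with spdk_get_version assumed as a cheap liveness probe:

    # Launch the target exactly as the trace does (-i 0 -e 0xFFFF -m 0xF).
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll /var/tmp/spdk.sock until the app responds (stand-in for waitforlisten).
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    until $RPC spdk_get_version >/dev/null 2>&1; do
        kill -0 $nvmfpid 2>/dev/null || exit 1   # give up if the target died
        sleep 0.5
    done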
00:12:31.235 [2024-11-20 12:26:36.758902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.235 [2024-11-20 12:26:36.759016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.235 [2024-11-20 12:26:36.759082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.235 [2024-11-20 12:26:36.759086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.235 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.235 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:31.235 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:31.235 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:31.235 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.235 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.235 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:12:31.235 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.235 12:26:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.235 [2024-11-20 12:26:36.945355] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:12:31.235 [2024-11-20 12:26:36.974249] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdc9df0/0xdce2e0) succeed. 00:12:31.235 [2024-11-20 12:26:36.989782] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdcb480/0xe0f980) succeed. 
00:12:31.493 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.493 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:31.493 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.493 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.493 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.493 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:31.493 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:31.493 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.493 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.493 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.493 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:31.493 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.493 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.493 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.494 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:31.494 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.494 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.494 [2024-11-20 12:26:37.172183] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:31.494 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.494 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:31.494 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:31.494 12:26:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:35.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:51.624 12:26:57 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:51.624 rmmod nvme_rdma 00:12:51.624 rmmod nvme_fabrics 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2744103 ']' 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2744103 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2744103 ']' 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2744103 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2744103 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2744103' 00:12:51.624 killing process with pid 2744103 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2744103 00:12:51.624 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2744103 00:12:51.883 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:51.883 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:51.883 00:12:51.883 real 0m23.672s 00:12:51.883 user 1m23.502s 00:12:51.883 sys 0m2.870s 00:12:51.883 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:51.883 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:51.883 
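Stripped of tracing, the nvmf_connect_disconnect test that just finished amounts to the RPC setup logged at 12:26:37 plus a short host-side loop. A sketch under the same addresses and NQN; the suite's own helpers add waits around each step, and nvme disconnect -n is assumed as the tear-down that produces the "disconnected 1 controller(s)" lines above:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Target side, mirroring the rpc_cmd calls in the trace.
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    $RPC bdev_malloc_create 64 512     # 64 MiB malloc bdev, 512 B blocks -> Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # Host side: five connect/disconnect iterations, matching num_iterations=5 above.
    for i in $(seq 1 5); do
        nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done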
************************************ 00:12:51.883 END TEST nvmf_connect_disconnect 00:12:51.883 ************************************ 00:12:52.143 12:26:57 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:12:52.143 12:26:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:52.143 12:26:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.143 12:26:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:52.143 ************************************ 00:12:52.143 START TEST nvmf_multitarget 00:12:52.143 ************************************ 00:12:52.143 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:12:52.144 * Looking for test storage... 00:12:52.144 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:52.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.144 --rc genhtml_branch_coverage=1 00:12:52.144 --rc genhtml_function_coverage=1 00:12:52.144 --rc genhtml_legend=1 00:12:52.144 --rc geninfo_all_blocks=1 00:12:52.144 --rc geninfo_unexecuted_blocks=1 00:12:52.144 00:12:52.144 ' 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:52.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.144 --rc genhtml_branch_coverage=1 00:12:52.144 --rc genhtml_function_coverage=1 00:12:52.144 --rc genhtml_legend=1 00:12:52.144 --rc geninfo_all_blocks=1 00:12:52.144 --rc geninfo_unexecuted_blocks=1 00:12:52.144 00:12:52.144 ' 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:52.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.144 --rc genhtml_branch_coverage=1 00:12:52.144 --rc genhtml_function_coverage=1 00:12:52.144 --rc genhtml_legend=1 00:12:52.144 --rc geninfo_all_blocks=1 00:12:52.144 --rc geninfo_unexecuted_blocks=1 00:12:52.144 00:12:52.144 ' 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:52.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.144 --rc genhtml_branch_coverage=1 00:12:52.144 --rc genhtml_function_coverage=1 00:12:52.144 --rc genhtml_legend=1 00:12:52.144 --rc geninfo_all_blocks=1 00:12:52.144 --rc geninfo_unexecuted_blocks=1 00:12:52.144 00:12:52.144 ' 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.144 12:26:57 
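The burst of scripts/common.sh tracing above is just a semantic version compare: lt 1.15 2 splits both versions on '.', '-', and ':' and walks the fields numerically, and because it returns 0 (lcov is older than 2.x) the legacy '--rc lcov_*' option spelling is exported below. A simplified reconstruction of the traced logic, covering only the '<' case used here:

    # simplified sketch of the traced cmp_versions; only the '<' operator is shown
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller: true
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly larger: false
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "legacy lcov options"   # prints: legacy lcov options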
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.144 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:52.145 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:52.145 12:26:57 
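The '[: : integer expression expected' line is a real, if harmless, bug in the sourced common.sh rather than log noise: the trace immediately before it shows '[' '' -eq 1 ']', i.e. an unset variable reaching an arithmetic test, which makes test(1) complain and return nonzero instead of evaluating to false. A defensive form would default the value first (the variable name below is a stand-in, not the one common.sh actually uses):

    # hypothetical fix for the failing test at nvmf/common.sh line 33
    flag=${SOME_NVMF_FLAG:-0}     # default the possibly-unset value to 0
    if [ "$flag" -eq 1 ]; then    # now always a well-formed integer comparison
        :                         # branch body unchanged
    fi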
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:52.145 12:26:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:12:54.684 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:12:54.684 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.684 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:12:54.684 Found net devices under 0000:83:00.0: mlx_0_0 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:12:54.685 Found net devices under 0000:83:00.1: mlx_0_1 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:54.685 12:27:00 
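All the array tracing above boils down to device discovery: seed lists of known RDMA-capable PCI IDs (Intel e810/x722, Mellanox mlx), keep only the Mellanox set because SPDK_TEST_NVMF_NICS=mlx5, then resolve each surviving PCI address to its kernel netdev through sysfs. The sysfs step reduces to a short walk (a sketch; the real common.sh also screens drivers and specific device IDs, as the [[ 0x1015 == ... ]] checks show):

    # sketch: map the two mlx5 PCI functions found above to their netdev names
    for pci in 0000:83:00.0 0000:83:00.1; do
        for netdir in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$netdir" ] || continue                    # skip if the glob matched nothing
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done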
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:54.685 12:27:00 
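With the IB and RDMA core modules loaded, the harness intersects the netdevs from the PCI walk with those rxe_cfg reports; the continue 2 in the trace breaks out of the inner loop as soon as an interface matches, so each device is emitted exactly once. Roughly:

    # sketch of the traced interface intersection (the get_rdma_if_list step)
    net_devs=(mlx_0_0 mlx_0_1)        # from the PCI/sysfs walk
    rxe_net_devs=(mlx_0_0 mlx_0_1)    # from rxe_cfg rxe-net
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"
                continue 2            # first match wins; move to the next net_dev
            fi
        done
    done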
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:54.685 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:54.685 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:12:54.685 altname enp131s0f0np0 00:12:54.685 inet 192.168.100.8/24 scope global mlx_0_0 00:12:54.685 valid_lft forever preferred_lft forever 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:54.685 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:54.685 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:12:54.685 altname enp131s0f1np1 00:12:54.685 inet 192.168.100.9/24 scope global mlx_0_1 00:12:54.685 valid_lft forever preferred_lft forever 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:54.685 12:27:00 
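The address lookup for each interface is exactly the three-stage pipeline in the trace: single-line ip output, take the CIDR field, strip the prefix length. As a standalone helper:

    # the traced per-interface IPv4 lookup, extracted as a helper
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8
    get_ip_address mlx_0_1   # -> 192.168.100.9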
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:54.685 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:54.686 192.168.100.9' 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:54.686 192.168.100.9' 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # head -n 1 00:12:54.686 12:27:00 
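The two test endpoints are then peeled off the newline-separated RDMA_IP_LIST with the head/tail pipeline the trace shows:

    # split the discovered RDMA IPs into the two target addresses, as traced
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9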
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:54.686 192.168.100.9' 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2747367 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2747367 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2747367 ']' 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.686 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:54.686 [2024-11-20 12:27:00.262537] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
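nvmfappstart -m 0xF launches the target with a four-core reactor mask and an all-groups trace mask (-e 0xFFFF), records nvmfpid, and blocks in waitforlisten until the RPC socket answers. A minimal start-and-wait sketch under those assumptions (polling via rpc_get_methods is a simplification of what waitforlisten actually does):

    # sketch: start nvmf_tgt and wait for /var/tmp/spdk.sock to answer
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died
        sleep 0.5
    done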
00:12:54.686 [2024-11-20 12:27:00.262645] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.686 [2024-11-20 12:27:00.337325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.686 [2024-11-20 12:27:00.400847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.686 [2024-11-20 12:27:00.400904] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.686 [2024-11-20 12:27:00.400920] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.686 [2024-11-20 12:27:00.400933] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.686 [2024-11-20 12:27:00.400944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.686 [2024-11-20 12:27:00.402237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.686 [2024-11-20 12:27:00.402290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.686 [2024-11-20 12:27:00.402351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.686 [2024-11-20 12:27:00.402348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.945 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.945 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:54.945 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:54.945 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:54.945 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:54.945 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.945 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:54.945 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:54.945 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:55.203 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:55.203 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:55.203 "nvmf_tgt_1" 00:12:55.203 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:55.461 "nvmf_tgt_2" 00:12:55.461 12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:55.461 
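From here the multitarget suite is an invariant check over the RPC surface: one default target at startup, three after two creates, one again after two deletes, with jq length counting the nvmf_get_targets result each time. Condensed from the traced calls:

    # condensed flow of the traced multitarget checks
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" = 1 ] || exit 1   # default target only
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32                # -s 32 as in the trace
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" = 3 ] || exit 1
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" = 1 ] || exit 1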
12:27:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:55.461 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:55.461 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:55.720 true 00:12:55.720 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:55.720 true 00:12:55.720 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:55.720 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:55.978 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:55.978 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:55.978 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:55.978 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:55.978 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:55.978 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:55.978 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:55.978 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:55.978 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:55.978 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:55.978 rmmod nvme_rdma 00:12:55.978 rmmod nvme_fabrics 00:12:55.978 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:55.978 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:55.978 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:55.978 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2747367 ']' 00:12:55.978 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2747367 00:12:55.978 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2747367 ']' 00:12:55.978 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2747367 00:12:55.979 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:55.979 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.979 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2747367 00:12:55.979 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:55.979 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:55.979 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2747367' 00:12:55.979 killing process with pid 2747367 00:12:55.979 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2747367 00:12:55.979 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2747367 00:12:56.237 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:56.237 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:56.237 00:12:56.237 real 0m4.154s 00:12:56.237 user 0m7.459s 00:12:56.237 sys 0m2.173s 00:12:56.237 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.237 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:56.237 ************************************ 00:12:56.237 END TEST nvmf_multitarget 00:12:56.237 ************************************ 00:12:56.237 12:27:01 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:12:56.237 12:27:01 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:56.237 12:27:01 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.237 12:27:01 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:56.237 ************************************ 00:12:56.237 START TEST nvmf_rpc 00:12:56.237 ************************************ 00:12:56.237 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:12:56.237 * Looking for test storage... 
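Every suite in this log is driven by the same run_test wrapper, which is where the banner blocks and the real/user/sys timing lines come from. Schematically (a simplified reconstruction of the autotest_common.sh helper, not its full argument checking):

    # simplified sketch of the run_test wrapper seen throughout this log
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # run the suite script with its arguments
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma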
00:12:56.237 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:56.237 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:56.237 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:56.237 12:27:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:56.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.496 --rc genhtml_branch_coverage=1 00:12:56.496 --rc genhtml_function_coverage=1 00:12:56.496 --rc genhtml_legend=1 00:12:56.496 --rc geninfo_all_blocks=1 00:12:56.496 --rc geninfo_unexecuted_blocks=1 00:12:56.496 00:12:56.496 ' 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:56.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.496 --rc genhtml_branch_coverage=1 00:12:56.496 --rc genhtml_function_coverage=1 00:12:56.496 --rc genhtml_legend=1 00:12:56.496 --rc geninfo_all_blocks=1 00:12:56.496 --rc geninfo_unexecuted_blocks=1 00:12:56.496 00:12:56.496 ' 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:56.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.496 --rc genhtml_branch_coverage=1 00:12:56.496 --rc genhtml_function_coverage=1 00:12:56.496 --rc genhtml_legend=1 00:12:56.496 --rc geninfo_all_blocks=1 00:12:56.496 --rc geninfo_unexecuted_blocks=1 00:12:56.496 00:12:56.496 ' 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:56.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.496 --rc genhtml_branch_coverage=1 00:12:56.496 --rc genhtml_function_coverage=1 00:12:56.496 --rc genhtml_legend=1 00:12:56.496 --rc geninfo_all_blocks=1 00:12:56.496 --rc geninfo_unexecuted_blocks=1 00:12:56.496 00:12:56.496 ' 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.496 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:56.497 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:56.497 12:27:02 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:56.497 12:27:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.033 12:27:04 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:12:59.033 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:12:59.033 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:12:59.033 Found net devices under 0000:83:00.0: mlx_0_0 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:12:59.033 Found net devices under 0000:83:00.1: mlx_0_1 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init 00:12:59.033 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:59.034 12:27:04 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:59.034 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:59.034 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:12:59.034 altname enp131s0f0np0 00:12:59.034 inet 192.168.100.8/24 scope global mlx_0_0 00:12:59.034 valid_lft forever preferred_lft forever 00:12:59.034 12:27:04 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:59.034 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:59.034 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:12:59.034 altname enp131s0f1np1 00:12:59.034 inet 192.168.100.9/24 scope global mlx_0_1 00:12:59.034 valid_lft forever preferred_lft forever 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.034 12:27:04 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:59.034 192.168.100.9' 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:59.034 192.168.100.9' 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:59.034 192.168.100.9' 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.034 12:27:04 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2748970 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2748970 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2748970 ']' 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.034 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.035 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.035 [2024-11-20 12:27:04.454286] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:12:59.035 [2024-11-20 12:27:04.454390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.035 [2024-11-20 12:27:04.528187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.035 [2024-11-20 12:27:04.590839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.035 [2024-11-20 12:27:04.590900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.035 [2024-11-20 12:27:04.590918] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.035 [2024-11-20 12:27:04.590932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.035 [2024-11-20 12:27:04.590944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:59.035 [2024-11-20 12:27:04.592230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.035 [2024-11-20 12:27:04.592283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.035 [2024-11-20 12:27:04.592333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.035 [2024-11-20 12:27:04.592337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.035 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.035 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:59.035 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:59.035 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:59.035 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.035 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.035 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:59.035 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.035 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.035 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.035 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:59.035 "tick_rate": 2700000000, 00:12:59.035 "poll_groups": [ 00:12:59.035 { 00:12:59.035 "name": "nvmf_tgt_poll_group_000", 00:12:59.035 "admin_qpairs": 0, 00:12:59.035 "io_qpairs": 0, 00:12:59.035 "current_admin_qpairs": 0, 00:12:59.035 "current_io_qpairs": 0, 00:12:59.035 "pending_bdev_io": 0, 00:12:59.035 "completed_nvme_io": 0, 00:12:59.035 "transports": [] 00:12:59.035 }, 00:12:59.035 { 00:12:59.035 "name": "nvmf_tgt_poll_group_001", 00:12:59.035 "admin_qpairs": 0, 00:12:59.035 "io_qpairs": 0, 00:12:59.035 "current_admin_qpairs": 0, 00:12:59.035 "current_io_qpairs": 0, 00:12:59.035 "pending_bdev_io": 0, 00:12:59.035 "completed_nvme_io": 0, 00:12:59.035 "transports": [] 00:12:59.035 }, 00:12:59.035 { 00:12:59.035 "name": "nvmf_tgt_poll_group_002", 00:12:59.035 "admin_qpairs": 0, 00:12:59.035 "io_qpairs": 0, 00:12:59.035 "current_admin_qpairs": 0, 00:12:59.035 "current_io_qpairs": 0, 00:12:59.035 "pending_bdev_io": 0, 00:12:59.035 "completed_nvme_io": 0, 00:12:59.035 "transports": [] 00:12:59.035 }, 00:12:59.035 { 00:12:59.035 "name": "nvmf_tgt_poll_group_003", 00:12:59.035 "admin_qpairs": 0, 00:12:59.035 "io_qpairs": 0, 00:12:59.035 "current_admin_qpairs": 0, 00:12:59.035 "current_io_qpairs": 0, 00:12:59.035 "pending_bdev_io": 0, 00:12:59.035 "completed_nvme_io": 0, 00:12:59.035 "transports": [] 00:12:59.035 } 00:12:59.035 ] 00:12:59.035 }' 00:12:59.035 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:59.035 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:59.035 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:59.035 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:59.294 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:12:59.294 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:59.294 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:59.294 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:59.294 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.294 12:27:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.294 [2024-11-20 12:27:04.903650] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7bcdf0/0x7c12e0) succeed. 00:12:59.294 [2024-11-20 12:27:04.918604] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7be480/0x802980) succeed. 00:12:59.552 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.552 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:59.552 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.552 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.552 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.552 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:59.552 "tick_rate": 2700000000, 00:12:59.552 "poll_groups": [ 00:12:59.552 { 00:12:59.552 "name": "nvmf_tgt_poll_group_000", 00:12:59.552 "admin_qpairs": 0, 00:12:59.552 "io_qpairs": 0, 00:12:59.552 "current_admin_qpairs": 0, 00:12:59.552 "current_io_qpairs": 0, 00:12:59.552 "pending_bdev_io": 0, 00:12:59.552 "completed_nvme_io": 0, 00:12:59.552 "transports": [ 00:12:59.552 { 00:12:59.552 "trtype": "RDMA", 00:12:59.552 "pending_data_buffer": 0, 00:12:59.552 "devices": [ 00:12:59.552 { 00:12:59.552 "name": "mlx5_0", 00:12:59.552 "polls": 19165, 00:12:59.552 "idle_polls": 19165, 00:12:59.552 "completions": 0, 00:12:59.552 "requests": 0, 00:12:59.552 "request_latency": 0, 00:12:59.552 "pending_free_request": 0, 00:12:59.552 "pending_rdma_read": 0, 00:12:59.552 "pending_rdma_write": 0, 00:12:59.552 "pending_rdma_send": 0, 00:12:59.552 "total_send_wrs": 0, 00:12:59.552 "send_doorbell_updates": 0, 00:12:59.552 "total_recv_wrs": 4096, 00:12:59.552 "recv_doorbell_updates": 1 00:12:59.552 }, 00:12:59.552 { 00:12:59.552 "name": "mlx5_1", 00:12:59.552 "polls": 19165, 00:12:59.552 "idle_polls": 19165, 00:12:59.552 "completions": 0, 00:12:59.552 "requests": 0, 00:12:59.552 "request_latency": 0, 00:12:59.552 "pending_free_request": 0, 00:12:59.552 "pending_rdma_read": 0, 00:12:59.552 "pending_rdma_write": 0, 00:12:59.552 "pending_rdma_send": 0, 00:12:59.552 "total_send_wrs": 0, 00:12:59.552 "send_doorbell_updates": 0, 00:12:59.552 "total_recv_wrs": 4096, 00:12:59.552 "recv_doorbell_updates": 1 00:12:59.552 } 00:12:59.552 ] 00:12:59.552 } 00:12:59.552 ] 00:12:59.552 }, 00:12:59.552 { 00:12:59.552 "name": "nvmf_tgt_poll_group_001", 00:12:59.552 "admin_qpairs": 0, 00:12:59.552 "io_qpairs": 0, 00:12:59.552 "current_admin_qpairs": 0, 00:12:59.552 "current_io_qpairs": 0, 00:12:59.552 "pending_bdev_io": 0, 00:12:59.552 "completed_nvme_io": 0, 00:12:59.552 "transports": [ 00:12:59.552 { 00:12:59.552 "trtype": "RDMA", 00:12:59.552 "pending_data_buffer": 0, 00:12:59.552 "devices": [ 00:12:59.552 { 00:12:59.552 "name": "mlx5_0", 
00:12:59.552 "polls": 12985, 00:12:59.552 "idle_polls": 12985, 00:12:59.552 "completions": 0, 00:12:59.552 "requests": 0, 00:12:59.552 "request_latency": 0, 00:12:59.552 "pending_free_request": 0, 00:12:59.552 "pending_rdma_read": 0, 00:12:59.552 "pending_rdma_write": 0, 00:12:59.552 "pending_rdma_send": 0, 00:12:59.552 "total_send_wrs": 0, 00:12:59.552 "send_doorbell_updates": 0, 00:12:59.552 "total_recv_wrs": 4096, 00:12:59.552 "recv_doorbell_updates": 1 00:12:59.552 }, 00:12:59.552 { 00:12:59.552 "name": "mlx5_1", 00:12:59.552 "polls": 12985, 00:12:59.552 "idle_polls": 12985, 00:12:59.552 "completions": 0, 00:12:59.552 "requests": 0, 00:12:59.552 "request_latency": 0, 00:12:59.552 "pending_free_request": 0, 00:12:59.552 "pending_rdma_read": 0, 00:12:59.552 "pending_rdma_write": 0, 00:12:59.552 "pending_rdma_send": 0, 00:12:59.552 "total_send_wrs": 0, 00:12:59.552 "send_doorbell_updates": 0, 00:12:59.552 "total_recv_wrs": 4096, 00:12:59.552 "recv_doorbell_updates": 1 00:12:59.552 } 00:12:59.552 ] 00:12:59.552 } 00:12:59.552 ] 00:12:59.552 }, 00:12:59.552 { 00:12:59.552 "name": "nvmf_tgt_poll_group_002", 00:12:59.552 "admin_qpairs": 0, 00:12:59.552 "io_qpairs": 0, 00:12:59.552 "current_admin_qpairs": 0, 00:12:59.552 "current_io_qpairs": 0, 00:12:59.552 "pending_bdev_io": 0, 00:12:59.552 "completed_nvme_io": 0, 00:12:59.552 "transports": [ 00:12:59.552 { 00:12:59.552 "trtype": "RDMA", 00:12:59.552 "pending_data_buffer": 0, 00:12:59.552 "devices": [ 00:12:59.552 { 00:12:59.552 "name": "mlx5_0", 00:12:59.552 "polls": 7025, 00:12:59.552 "idle_polls": 7025, 00:12:59.552 "completions": 0, 00:12:59.552 "requests": 0, 00:12:59.552 "request_latency": 0, 00:12:59.552 "pending_free_request": 0, 00:12:59.552 "pending_rdma_read": 0, 00:12:59.552 "pending_rdma_write": 0, 00:12:59.552 "pending_rdma_send": 0, 00:12:59.552 "total_send_wrs": 0, 00:12:59.552 "send_doorbell_updates": 0, 00:12:59.552 "total_recv_wrs": 4096, 00:12:59.552 "recv_doorbell_updates": 1 00:12:59.552 }, 00:12:59.552 { 00:12:59.552 "name": "mlx5_1", 00:12:59.552 "polls": 7025, 00:12:59.552 "idle_polls": 7025, 00:12:59.552 "completions": 0, 00:12:59.552 "requests": 0, 00:12:59.552 "request_latency": 0, 00:12:59.553 "pending_free_request": 0, 00:12:59.553 "pending_rdma_read": 0, 00:12:59.553 "pending_rdma_write": 0, 00:12:59.553 "pending_rdma_send": 0, 00:12:59.553 "total_send_wrs": 0, 00:12:59.553 "send_doorbell_updates": 0, 00:12:59.553 "total_recv_wrs": 4096, 00:12:59.553 "recv_doorbell_updates": 1 00:12:59.553 } 00:12:59.553 ] 00:12:59.553 } 00:12:59.553 ] 00:12:59.553 }, 00:12:59.553 { 00:12:59.553 "name": "nvmf_tgt_poll_group_003", 00:12:59.553 "admin_qpairs": 0, 00:12:59.553 "io_qpairs": 0, 00:12:59.553 "current_admin_qpairs": 0, 00:12:59.553 "current_io_qpairs": 0, 00:12:59.553 "pending_bdev_io": 0, 00:12:59.553 "completed_nvme_io": 0, 00:12:59.553 "transports": [ 00:12:59.553 { 00:12:59.553 "trtype": "RDMA", 00:12:59.553 "pending_data_buffer": 0, 00:12:59.553 "devices": [ 00:12:59.553 { 00:12:59.553 "name": "mlx5_0", 00:12:59.553 "polls": 928, 00:12:59.553 "idle_polls": 928, 00:12:59.553 "completions": 0, 00:12:59.553 "requests": 0, 00:12:59.553 "request_latency": 0, 00:12:59.553 "pending_free_request": 0, 00:12:59.553 "pending_rdma_read": 0, 00:12:59.553 "pending_rdma_write": 0, 00:12:59.553 "pending_rdma_send": 0, 00:12:59.553 "total_send_wrs": 0, 00:12:59.553 "send_doorbell_updates": 0, 00:12:59.553 "total_recv_wrs": 4096, 00:12:59.553 "recv_doorbell_updates": 1 00:12:59.553 }, 00:12:59.553 { 00:12:59.553 "name": "mlx5_1", 
00:12:59.553 "polls": 928, 00:12:59.553 "idle_polls": 928, 00:12:59.553 "completions": 0, 00:12:59.553 "requests": 0, 00:12:59.553 "request_latency": 0, 00:12:59.553 "pending_free_request": 0, 00:12:59.553 "pending_rdma_read": 0, 00:12:59.553 "pending_rdma_write": 0, 00:12:59.553 "pending_rdma_send": 0, 00:12:59.553 "total_send_wrs": 0, 00:12:59.553 "send_doorbell_updates": 0, 00:12:59.553 "total_recv_wrs": 4096, 00:12:59.553 "recv_doorbell_updates": 1 00:12:59.553 } 00:12:59.553 ] 00:12:59.553 } 00:12:59.553 ] 00:12:59.553 } 00:12:59.553 ] 00:12:59.553 }' 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:12:59.553 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:59.811 12:27:05 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.811 Malloc1 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.811 [2024-11-20 12:27:05.405528] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -a 192.168.100.8 -s 4420 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -a 192.168.100.8 -s 4420 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:59.811 12:27:05 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -a 192.168.100.8 -s 4420 00:12:59.811 [2024-11-20 12:27:05.445829] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae' 00:12:59.811 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:59.811 could not add new controller: failed to write to nvme-fabrics device 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:59.811 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:59.812 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:12:59.812 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.812 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.812 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.812 12:27:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:00.744 12:27:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:00.744 12:27:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:00.744 12:27:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.744 12:27:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:00.744 12:27:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:03.273 12:27:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:03.273 12:27:08 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:03.273 12:27:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.273 12:27:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:03.273 12:27:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.273 12:27:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:03.273 12:27:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:03.839 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:03.839 [2024-11-20 12:27:09.588602] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae' 00:13:04.097 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:04.097 could not add new controller: failed to write to nvme-fabrics device 00:13:04.097 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:04.097 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:04.097 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:04.097 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:04.097 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:04.097 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.097 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.097 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.097 12:27:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:05.030 12:27:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:05.030 12:27:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:05.030 12:27:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.030 12:27:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:05.030 12:27:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:06.929 12:27:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:06.929 12:27:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:06.929 12:27:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.929 12:27:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:06.929 12:27:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.929 12:27:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:06.929 12:27:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.863 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.863 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:07.863 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:07.863 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.863 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:07.863 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.121 [2024-11-20 12:27:13.664248] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.121 12:27:13 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.121 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.122 12:27:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:09.055 12:27:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:09.055 12:27:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:09.055 12:27:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:09.055 12:27:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:09.055 12:27:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:10.954 12:27:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:10.954 12:27:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:10.954 12:27:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.954 12:27:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:10.954 12:27:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.954 12:27:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:10.954 12:27:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.328 [2024-11-20 12:27:17.719032] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.328 12:27:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:13.261 12:27:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:13.261 12:27:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:13.261 12:27:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:13.261 12:27:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:13.261 12:27:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:15.159 12:27:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:15.159 12:27:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:15.159 
12:27:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.159 12:27:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:15.159 12:27:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.159 12:27:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:15.159 12:27:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.092 [2024-11-20 12:27:21.760556] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.092 12:27:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:17.026 12:27:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:17.026 12:27:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:17.026 12:27:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.026 12:27:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:17.026 12:27:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:19.556 12:27:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:19.556 12:27:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:19.556 12:27:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.556 12:27:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:19.556 12:27:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.556 12:27:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:19.556 12:27:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:20.123 12:27:25 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.123 [2024-11-20 12:27:25.799926] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.123 12:27:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 
--hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:21.076 12:27:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.076 12:27:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:21.076 12:27:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.076 12:27:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:21.076 12:27:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:23.634 12:27:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:23.634 12:27:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:23.634 12:27:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.634 12:27:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:23.634 12:27:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.634 12:27:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:23.634 12:27:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:24.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:24.200 12:27:29 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.200 [2024-11-20 12:27:29.847904] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.200 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.201 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.201 12:27:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:25.134 12:27:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:25.134 12:27:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:25.134 12:27:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:25.134 12:27:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:25.134 12:27:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:27.660 12:27:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:27.660 12:27:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:27.660 12:27:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:27.660 12:27:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:27.660 12:27:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:13:27.660 12:27:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:27.660 12:27:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:28.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.223 [2024-11-20 12:27:33.934427] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.223 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.223 [2024-11-20 12:27:33.986568] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:28.481 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.481 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:28.481 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.481 12:27:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.481 12:27:34 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.481 [2024-11-20 12:27:34.038747] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.481 [2024-11-20 12:27:34.088157] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
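Every connect in the cycles traced above is gated on waitforserial (autotest_common.sh@1202-@1212) before the script dares to disconnect again. Reconstructed from those trace lines, the polling idiom looks roughly like the sketch below; the retry delay and the failure return are assumptions, since the passing runs above never reach them:

    # sketch of autotest_common.sh's waitforserial, inferred from the xtrace above
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        sleep 2                       # initial settle time, as traced at @1209
        while (( i++ <= 15 )); do     # bounded retry, as traced at @1210
            # count block devices whose SERIAL column matches (@1211)
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0   # @1212
            sleep 1                   # assumed back-off; not visible in the trace
        done
        return 1                      # assumed failure path
    }

waitforserial_disconnect (@1223-@1235) inverts the check with grep -q -w, returning once the serial no longer shows up in the lsblk output.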
00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:28.481 [2024-11-20 12:27:34.137475] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:28.481 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:13:28.482 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:28.482 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:28.482 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:28.482 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:13:28.482 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:28.482 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:28.482 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:28.482 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:28.482 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
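Condensed out of the xtrace, each pass of the rpc.sh@99 loop just completed runs the whole subsystem lifecycle through rpc_cmd (the suite's wrapper around scripts/rpc.py). A minimal equivalent sketch, with the loop bound and names taken from the trace:

    for i in $(seq 1 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # no -n: target picks the nsid (1 here, per the remove below)
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

Unlike the earlier @81 loop, no host ever connects here; the point is that repeated create/configure/teardown cycles leave the target healthy enough for the nvmf_get_stats dump that follows.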
00:13:28.482 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:28.482 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:28.482 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:13:28.482 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:28.482 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:28.482 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:28.482 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:13:28.482   "tick_rate": 2700000000,
00:13:28.482   "poll_groups": [
00:13:28.482     {
00:13:28.482       "name": "nvmf_tgt_poll_group_000",
00:13:28.482       "admin_qpairs": 2,
00:13:28.482       "io_qpairs": 27,
00:13:28.482       "current_admin_qpairs": 0,
00:13:28.482       "current_io_qpairs": 0,
00:13:28.482       "pending_bdev_io": 0,
00:13:28.482       "completed_nvme_io": 92,
00:13:28.482       "transports": [
00:13:28.482         {
00:13:28.482           "trtype": "RDMA",
00:13:28.482           "pending_data_buffer": 0,
00:13:28.482           "devices": [
00:13:28.482             {
00:13:28.482               "name": "mlx5_0",
00:13:28.482               "polls": 3387793,
00:13:28.482               "idle_polls": 3387515,
00:13:28.482               "completions": 299,
00:13:28.482               "requests": 149,
00:13:28.482               "request_latency": 40526972,
00:13:28.482               "pending_free_request": 0,
00:13:28.482               "pending_rdma_read": 0,
00:13:28.482               "pending_rdma_write": 0,
00:13:28.482               "pending_rdma_send": 0,
00:13:28.482               "total_send_wrs": 242,
00:13:28.482               "send_doorbell_updates": 139,
00:13:28.482               "total_recv_wrs": 4245,
00:13:28.482               "recv_doorbell_updates": 139
00:13:28.482             },
00:13:28.482             {
00:13:28.482               "name": "mlx5_1",
00:13:28.482               "polls": 3387793,
00:13:28.482               "idle_polls": 3387793,
00:13:28.482               "completions": 0,
00:13:28.482               "requests": 0,
00:13:28.482               "request_latency": 0,
00:13:28.482               "pending_free_request": 0,
00:13:28.482               "pending_rdma_read": 0,
00:13:28.482               "pending_rdma_write": 0,
00:13:28.482               "pending_rdma_send": 0,
00:13:28.482               "total_send_wrs": 0,
00:13:28.482               "send_doorbell_updates": 0,
00:13:28.482               "total_recv_wrs": 4096,
00:13:28.482               "recv_doorbell_updates": 1
00:13:28.482             }
00:13:28.482           ]
00:13:28.482         }
00:13:28.482       ]
00:13:28.482     },
00:13:28.482     {
00:13:28.482       "name": "nvmf_tgt_poll_group_001",
00:13:28.482       "admin_qpairs": 2,
00:13:28.482       "io_qpairs": 26,
00:13:28.482       "current_admin_qpairs": 0,
00:13:28.482       "current_io_qpairs": 0,
00:13:28.482       "pending_bdev_io": 0,
00:13:28.482       "completed_nvme_io": 82,
00:13:28.482       "transports": [
00:13:28.482         {
00:13:28.482           "trtype": "RDMA",
00:13:28.482           "pending_data_buffer": 0,
00:13:28.482           "devices": [
00:13:28.482             {
00:13:28.482               "name": "mlx5_0",
00:13:28.482               "polls": 3466706,
00:13:28.482               "idle_polls": 3466457,
00:13:28.482               "completions": 272,
00:13:28.482               "requests": 136,
00:13:28.482               "request_latency": 36993956,
00:13:28.482               "pending_free_request": 0,
00:13:28.482               "pending_rdma_read": 0,
00:13:28.482               "pending_rdma_write": 0,
00:13:28.482               "pending_rdma_send": 0,
00:13:28.482               "total_send_wrs": 218,
00:13:28.482               "send_doorbell_updates": 125,
00:13:28.482               "total_recv_wrs": 4232,
00:13:28.482               "recv_doorbell_updates": 126
00:13:28.482             },
00:13:28.482             {
00:13:28.482               "name": "mlx5_1",
00:13:28.482               "polls": 3466706,
00:13:28.482               "idle_polls": 3466706,
00:13:28.482               "completions": 0,
00:13:28.482               "requests": 0,
00:13:28.482               "request_latency": 0,
00:13:28.482               "pending_free_request": 0,
00:13:28.482               "pending_rdma_read": 0,
00:13:28.482               "pending_rdma_write": 0,
00:13:28.482               "pending_rdma_send": 0,
00:13:28.482               "total_send_wrs": 0,
00:13:28.482               "send_doorbell_updates": 0,
00:13:28.482               "total_recv_wrs": 4096,
00:13:28.482               "recv_doorbell_updates": 1
00:13:28.482             }
00:13:28.482           ]
00:13:28.482         }
00:13:28.482       ]
00:13:28.482     },
00:13:28.482     {
00:13:28.482       "name": "nvmf_tgt_poll_group_002",
00:13:28.482       "admin_qpairs": 1,
00:13:28.482       "io_qpairs": 26,
00:13:28.482       "current_admin_qpairs": 0,
00:13:28.482       "current_io_qpairs": 0,
00:13:28.482       "pending_bdev_io": 0,
00:13:28.482       "completed_nvme_io": 156,
00:13:28.482       "transports": [
00:13:28.482         {
00:13:28.482           "trtype": "RDMA",
00:13:28.482           "pending_data_buffer": 0,
00:13:28.482           "devices": [
00:13:28.482             {
00:13:28.482               "name": "mlx5_0",
00:13:28.482               "polls": 3440065,
00:13:28.482               "idle_polls": 3439753,
00:13:28.482               "completions": 371,
00:13:28.482               "requests": 185,
00:13:28.482               "request_latency": 77268048,
00:13:28.482               "pending_free_request": 0,
00:13:28.482               "pending_rdma_read": 0,
00:13:28.482               "pending_rdma_write": 0,
00:13:28.482               "pending_rdma_send": 0,
00:13:28.482               "total_send_wrs": 330,
00:13:28.482               "send_doorbell_updates": 150,
00:13:28.482               "total_recv_wrs": 4281,
00:13:28.482               "recv_doorbell_updates": 150
00:13:28.482             },
00:13:28.482             {
00:13:28.482               "name": "mlx5_1",
00:13:28.482               "polls": 3440065,
00:13:28.482               "idle_polls": 3440065,
00:13:28.482               "completions": 0,
00:13:28.482               "requests": 0,
00:13:28.482               "request_latency": 0,
00:13:28.482               "pending_free_request": 0,
00:13:28.482               "pending_rdma_read": 0,
00:13:28.482               "pending_rdma_write": 0,
00:13:28.482               "pending_rdma_send": 0,
00:13:28.482               "total_send_wrs": 0,
00:13:28.482               "send_doorbell_updates": 0,
00:13:28.482               "total_recv_wrs": 4096,
00:13:28.482               "recv_doorbell_updates": 1
00:13:28.482             }
00:13:28.482           ]
00:13:28.482         }
00:13:28.482       ]
00:13:28.482     },
00:13:28.482     {
00:13:28.482       "name": "nvmf_tgt_poll_group_003",
00:13:28.482       "admin_qpairs": 2,
00:13:28.482       "io_qpairs": 26,
00:13:28.482       "current_admin_qpairs": 0,
00:13:28.482       "current_io_qpairs": 0,
00:13:28.482       "pending_bdev_io": 0,
00:13:28.482       "completed_nvme_io": 125,
00:13:28.482       "transports": [
00:13:28.482         {
00:13:28.482           "trtype": "RDMA",
00:13:28.482           "pending_data_buffer": 0,
00:13:28.482           "devices": [
00:13:28.482             {
00:13:28.482               "name": "mlx5_0",
00:13:28.482               "polls": 2805977,
00:13:28.482               "idle_polls": 2805659,
00:13:28.482               "completions": 360,
00:13:28.482               "requests": 180,
00:13:28.482               "request_latency": 62011544,
00:13:28.482               "pending_free_request": 0,
00:13:28.482               "pending_rdma_read": 0,
00:13:28.482               "pending_rdma_write": 0,
00:13:28.482               "pending_rdma_send": 0,
00:13:28.482               "total_send_wrs": 306,
00:13:28.482               "send_doorbell_updates": 155,
00:13:28.482               "total_recv_wrs": 4276,
00:13:28.482               "recv_doorbell_updates": 156
00:13:28.482             },
00:13:28.482             {
00:13:28.482               "name": "mlx5_1",
00:13:28.482               "polls": 2805977,
00:13:28.482               "idle_polls": 2805977,
00:13:28.482               "completions": 0,
00:13:28.482               "requests": 0,
00:13:28.482               "request_latency": 0,
00:13:28.482               "pending_free_request": 0,
00:13:28.482               "pending_rdma_read": 0,
00:13:28.482               "pending_rdma_write": 0,
00:13:28.482               "pending_rdma_send": 0,
00:13:28.482               "total_send_wrs": 0,
00:13:28.482               "send_doorbell_updates": 0,
00:13:28.482               "total_recv_wrs": 4096,
00:13:28.482               "recv_doorbell_updates": 1
00:13:28.482             }
00:13:28.482           ]
00:13:28.482         }
00:13:28.482       ]
00:13:28.482     }
00:13:28.482   ]
00:13:28.482 }'
00:13:28.482 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:13:28.482 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:13:28.482 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:13:28.482 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 ))
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']'
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions'
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions'
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions'
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1302 > 0 ))
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency'
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency'
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency'
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 216800520 > 0 ))
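The jsum helper traced at rpc.sh@19-@20 is nothing more than jq piped into awk. A sketch consistent with those two trace lines (the real definition lives in test/nvmf/target/rpc.sh; feeding it from a $stats variable is an assumption):

    jsum() {
        local filter=$1
        # emit one number per match, then sum them all
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

Applied to the dump above, jsum '.poll_groups[].admin_qpairs' yields 2 + 2 + 1 + 2 = 7, and the four assertions just traced check exactly those sums (7 admin qpairs, 105 I/O qpairs, 1302 RDMA completions, 216800520 aggregate request latency) for being positive.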
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:13:28.741 rmmod nvme_rdma
00:13:28.741 rmmod nvme_fabrics
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2748970 ']'
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2748970
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2748970 ']'
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2748970
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2748970
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2748970'
00:13:28.741 killing process with pid 2748970
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2748970
00:13:28.741 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2748970
00:13:29.310 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:29.310 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:13:29.310
00:13:29.310 real 0m32.936s
00:13:29.310 user 2m0.995s
00:13:29.310 sys 0m3.310s
00:13:29.310 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:29.310 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:29.310 ************************************
00:13:29.310 END TEST nvmf_rpc
00:13:29.310 ************************************
00:13:29.310 12:27:34 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma
00:13:29.310 12:27:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:29.310 12:27:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:29.310 12:27:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:29.310 ************************************
00:13:29.310 START TEST nvmf_invalid
00:13:29.310 ************************************
00:13:29.310 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma
00:13:29.310 * Looking for test storage...
00:13:29.310 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:29.310 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:29.310 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:13:29.310 12:27:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:29.310 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:29.310 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:29.310 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:29.310 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:29.310 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:29.310 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:29.310 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:29.310 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:29.310 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:29.310 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:29.310 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:29.310 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:29.310 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:29.310 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:29.310 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:29.310 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:29.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.571 --rc genhtml_branch_coverage=1 00:13:29.571 --rc genhtml_function_coverage=1 00:13:29.571 --rc genhtml_legend=1 00:13:29.571 --rc geninfo_all_blocks=1 00:13:29.571 --rc geninfo_unexecuted_blocks=1 00:13:29.571 00:13:29.571 ' 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:29.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.571 --rc genhtml_branch_coverage=1 00:13:29.571 --rc genhtml_function_coverage=1 00:13:29.571 --rc genhtml_legend=1 00:13:29.571 --rc geninfo_all_blocks=1 00:13:29.571 --rc geninfo_unexecuted_blocks=1 00:13:29.571 00:13:29.571 ' 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:29.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.571 --rc genhtml_branch_coverage=1 00:13:29.571 --rc genhtml_function_coverage=1 00:13:29.571 --rc genhtml_legend=1 00:13:29.571 --rc geninfo_all_blocks=1 00:13:29.571 --rc geninfo_unexecuted_blocks=1 00:13:29.571 00:13:29.571 ' 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:29.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.571 --rc genhtml_branch_coverage=1 00:13:29.571 --rc genhtml_function_coverage=1 00:13:29.571 --rc genhtml_legend=1 00:13:29.571 --rc geninfo_all_blocks=1 00:13:29.571 --rc geninfo_unexecuted_blocks=1 00:13:29.571 00:13:29.571 ' 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:29.571 
12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:29.571 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:29.571 12:27:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:32.099 12:27:37 
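Before touching hardware, nvmftestinit above registers its teardown with `trap nvmftestfini SIGINT SIGTERM EXIT` and runs the noisy `_remove_spdk_ns` helper through xtrace_disable_per_cmd so it doesn't flood the -x trace. A rough sketch of both idioms; `cleanup` and `quiet` are hypothetical stand-ins, not the suite's functions:

#!/usr/bin/env bash
set -x

# Hypothetical teardown; the suite's real handler is nvmftestfini.
cleanup() {
    echo "releasing test resources" >&2
}

# Register teardown before any setup work, as nvmftestinit does:
# it fires on Ctrl-C, kill, and normal exit alike.
trap cleanup SIGINT SIGTERM EXIT

# Run a chatty helper without flooding the -x trace, in the spirit
# of xtrace_disable_per_cmd; 'quiet' is not the suite's name for it.
quiet() {
    { set +x; } 2>/dev/null      # hide the 'set +x' line itself
    "$@"
    local rc=$?
    set -x
    return "$rc"
}

quiet echo "namespace removal would happen here"
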
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:13:32.099 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:32.099 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:13:32.100 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:13:32.100 Found net devices under 0000:83:00.0: mlx_0_0 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:13:32.100 Found net devices under 0000:83:00.1: mlx_0_1 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init 00:13:32.100 12:27:37 
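Both ports of the adapter were just matched by vendor:device ID (0x15b3 - 0x1015), and since the transport is rdma, NVME_CONNECT was extended to 'nvme connect -i 15'. The matching itself is a sysfs walk from PCI function to netdev; a minimal sketch, assuming a Linux host, with the IDs taken from this log:

#!/usr/bin/env bash
# Sketch: resolve Mellanox 0x15b3:0x1015 PCI functions to netdevs via
# sysfs, mirroring the pci_devs/pci_net_devs loops traced above.

mellanox=0x15b3
wanted=0x1015
net_devs=()

for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")
    device=$(<"$pci/device")
    [[ $vendor == "$mellanox" && $device == "$wanted" ]] || continue
    echo "Found ${pci##*/} ($vendor - $device)"
    # Each bound function exposes its interface(s) under .../net/<ifname>.
    for netdir in "$pci"/net/*; do
        [[ -e $netdir ]] && net_devs+=("${netdir##*/}")
    done
done

(( ${#net_devs[@]} )) && printf 'net device: %s\n' "${net_devs[@]}"
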
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:32.100 12:27:37 
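load_ib_rdma_modules probes the kernel RDMA stack (ib_cm, ib_core, ib_umad, ib_uverbs, iw_cm, rdma_cm, rdma_ucm), and get_rdma_if_list then keeps only the netdevs that rxe_cfg also reports, with `continue 2` making each interface print at most once. The same intersection, condensed; the arrays below are placeholders for the real PCI scan and rxe_cfg output:

#!/usr/bin/env bash
# Condensed get_rdma_if_list-style intersection, as traced above.
# The arrays are placeholders: the real script fills net_devs from
# the PCI scan and rxe_net_devs from "rxe_cfg rxe-net" via mapfile.

net_devs=(mlx_0_0 mlx_0_1 eno1)
rxe_net_devs=(mlx_0_0 mlx_0_1)

for net_dev in "${net_devs[@]}"; do
    for rxe_net_dev in "${rxe_net_devs[@]}"; do
        if [[ $net_dev == "$rxe_net_dev" ]]; then
            echo "$net_dev"
            continue 2        # next net_dev; print each match once
        fi
    done
done
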
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:32.100 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:32.100 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:13:32.100 altname enp131s0f0np0 00:13:32.100 inet 192.168.100.8/24 scope global mlx_0_0 00:13:32.100 valid_lft forever preferred_lft forever 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:32.100 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:32.100 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:13:32.100 altname enp131s0f1np1 00:13:32.100 inet 192.168.100.9/24 scope global mlx_0_1 00:13:32.100 valid_lft forever preferred_lft forever 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # 
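get_ip_address, traced twice above, is a three-stage pipeline: `ip -o -4 addr show` emits one record per line, awk picks the CIDR field, and cut strips the prefix length; the caller then checks the result is non-empty before proceeding. As a standalone helper, assuming iproute2's one-line (-o) output format:

#!/usr/bin/env bash
# Standalone version of the get_ip_address helper traced above:
# first IPv4 address of an interface, with the /prefix stripped.

get_ip_address() {
    local interface=$1
    # -o prints one record per line; field 4 is e.g. 192.168.100.8/24
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

addr=$(get_ip_address mlx_0_0)        # -> 192.168.100.8 on this rig
[[ -z $addr ]] && echo "no IPv4 address on mlx_0_0" >&2
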
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:32.100 192.168.100.9' 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:32.100 192.168.100.9' 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # echo '192.168.100.8 
00:13:32.100 192.168.100.9' 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2753393 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2753393 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2753393 ']' 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.100 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.101 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:32.101 [2024-11-20 12:27:37.595784] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:13:32.101 [2024-11-20 12:27:37.595890] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.101 [2024-11-20 12:27:37.669811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:32.101 [2024-11-20 12:27:37.734335] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.101 [2024-11-20 12:27:37.734395] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
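Above, the two discovered addresses are joined into RDMA_IP_LIST and split back out with head/tail to become the first and second target IPs; nvmfappstart then launches nvmf_tgt in the background (pid 2753393 here) and blocks until the RPC socket /var/tmp/spdk.sock answers. A sketch of both steps; waitforsocket and the python3 stand-in daemon are assumptions, not SPDK's waitforlisten:

#!/usr/bin/env bash
# Sketch: split RDMA_IP_LIST the way the trace does, then poll for a
# daemon's UNIX RPC socket. waitforsocket and the python3 stand-in
# are assumptions; SPDK's waitforlisten is more involved.

RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "targets: $NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"

waitforsocket() {
    local sock=$1 deadline=$((SECONDS + 30))
    until [[ -S $sock ]]; do           # -S: exists and is a socket
        (( SECONDS < deadline )) || { echo "timeout on $sock" >&2; return 1; }
        sleep 0.1
    done
}

sock=/tmp/demo_rpc.sock                # stand-in for /var/tmp/spdk.sock
rm -f "$sock"
python3 -c 'import socket, time, sys
s = socket.socket(socket.AF_UNIX)
s.bind(sys.argv[1])
time.sleep(5)' "$sock" &
nvmfpid=$!
waitforsocket "$sock" && echo "pid $nvmfpid is listening"
kill "$nvmfpid" 2>/dev/null
rm -f "$sock"
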
00:13:32.101 [2024-11-20 12:27:37.734411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.101 [2024-11-20 12:27:37.734424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:32.101 [2024-11-20 12:27:37.734435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:32.101 [2024-11-20 12:27:37.738536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.101 [2024-11-20 12:27:37.738619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.101 [2024-11-20 12:27:37.738705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:32.101 [2024-11-20 12:27:37.738738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.358 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:32.358 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:32.358 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:32.358 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:32.358 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:32.358 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.358 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:32.358 12:27:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11939 00:13:32.615 [2024-11-20 12:27:38.193897] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:32.616 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:32.616 { 00:13:32.616 "nqn": "nqn.2016-06.io.spdk:cnode11939", 00:13:32.616 "tgt_name": "foobar", 00:13:32.616 "method": "nvmf_create_subsystem", 00:13:32.616 "req_id": 1 00:13:32.616 } 00:13:32.616 Got JSON-RPC error response 00:13:32.616 response: 00:13:32.616 { 00:13:32.616 "code": -32603, 00:13:32.616 "message": "Unable to find target foobar" 00:13:32.616 }' 00:13:32.616 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:32.616 { 00:13:32.616 "nqn": "nqn.2016-06.io.spdk:cnode11939", 00:13:32.616 "tgt_name": "foobar", 00:13:32.616 "method": "nvmf_create_subsystem", 00:13:32.616 "req_id": 1 00:13:32.616 } 00:13:32.616 Got JSON-RPC error response 00:13:32.616 response: 00:13:32.616 { 00:13:32.616 "code": -32603, 00:13:32.616 "message": "Unable to find target foobar" 00:13:32.616 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:32.616 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:32.616 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1859 00:13:32.873 [2024-11-20 12:27:38.531083] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode1859: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:32.874 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:32.874 { 00:13:32.874 "nqn": "nqn.2016-06.io.spdk:cnode1859", 00:13:32.874 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:32.874 "method": "nvmf_create_subsystem", 00:13:32.874 "req_id": 1 00:13:32.874 } 00:13:32.874 Got JSON-RPC error response 00:13:32.874 response: 00:13:32.874 { 00:13:32.874 "code": -32602, 00:13:32.874 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:32.874 }' 00:13:32.874 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:32.874 { 00:13:32.874 "nqn": "nqn.2016-06.io.spdk:cnode1859", 00:13:32.874 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:32.874 "method": "nvmf_create_subsystem", 00:13:32.874 "req_id": 1 00:13:32.874 } 00:13:32.874 Got JSON-RPC error response 00:13:32.874 response: 00:13:32.874 { 00:13:32.874 "code": -32602, 00:13:32.874 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:32.874 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:32.874 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:32.874 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27465 00:13:33.131 [2024-11-20 12:27:38.868223] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27465: invalid model number 'SPDK_Controller' 00:13:33.131 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:33.131 { 00:13:33.131 "nqn": "nqn.2016-06.io.spdk:cnode27465", 00:13:33.131 "model_number": "SPDK_Controller\u001f", 00:13:33.131 "method": "nvmf_create_subsystem", 00:13:33.131 "req_id": 1 00:13:33.131 } 00:13:33.131 Got JSON-RPC error response 00:13:33.131 response: 00:13:33.131 { 00:13:33.131 "code": -32602, 00:13:33.131 "message": "Invalid MN SPDK_Controller\u001f" 00:13:33.131 }' 00:13:33.131 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:33.131 { 00:13:33.131 "nqn": "nqn.2016-06.io.spdk:cnode27465", 00:13:33.131 "model_number": "SPDK_Controller\u001f", 00:13:33.131 "method": "nvmf_create_subsystem", 00:13:33.131 "req_id": 1 00:13:33.131 } 00:13:33.131 Got JSON-RPC error response 00:13:33.131 response: 00:13:33.131 { 00:13:33.131 "code": -32602, 00:13:33.131 "message": "Invalid MN SPDK_Controller\u001f" 00:13:33.131 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:33.131 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:33.131 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:33.131 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:33.131 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@21 -- # local chars 00:13:33.131 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:33.131 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:33.388 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.388 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:33.388 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:33.388 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:33.388 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.388 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.388 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:33.388 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:33.388 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:33.388 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.388 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.388 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:33.388 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:33.388 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:33.388 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.388 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.388 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.389 12:27:38 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:33.389 12:27:38 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ f == \- ]] 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'fy2UO:c"a"j7N@~6_OZ`g' 00:13:33.389 12:27:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'fy2UO:c"a"j7N@~6_OZ`g' nqn.2016-06.io.spdk:cnode24224 00:13:33.649 [2024-11-20 12:27:39.289561] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24224: invalid serial number 'fy2UO:c"a"j7N@~6_OZ`g' 00:13:33.649 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:33.649 { 00:13:33.649 "nqn": "nqn.2016-06.io.spdk:cnode24224", 00:13:33.649 "serial_number": "fy2UO:c\"a\"j7N@~6_OZ`g", 00:13:33.649 "method": "nvmf_create_subsystem", 00:13:33.649 "req_id": 1 00:13:33.649 } 00:13:33.649 Got JSON-RPC error response 00:13:33.649 response: 00:13:33.649 { 00:13:33.649 "code": -32602, 00:13:33.649 "message": "Invalid SN fy2UO:c\"a\"j7N@~6_OZ`g" 00:13:33.649 }' 00:13:33.649 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:33.649 { 00:13:33.649 "nqn": "nqn.2016-06.io.spdk:cnode24224", 00:13:33.649 "serial_number": "fy2UO:c\"a\"j7N@~6_OZ`g", 00:13:33.649 "method": "nvmf_create_subsystem", 00:13:33.649 "req_id": 1 00:13:33.649 } 00:13:33.649 Got JSON-RPC error response 00:13:33.649 response: 00:13:33.649 { 00:13:33.649 "code": -32602, 00:13:33.649 "message": "Invalid SN fy2UO:c\"a\"j7N@~6_OZ`g" 00:13:33.650 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:33.650 
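That closes the first random-string cycle: gen_random_s walked a table of ASCII codes 32-127, converting each $RANDOM pick to hex with printf %x and materializing it with echo -e '\xHH' (RANDOM=0 was seeded back at invalid.sh@16, so the 21-character result is reproducible), and the string was then rejected as an invalid SN exactly as intended. Condensed into one function; the remedy after the leading-'-' check is a guess, since the trace only shows the check itself:

#!/usr/bin/env bash
# Condensed sketch of gen_random_s as traced above: build an N-char
# string from ASCII codes 32..127, one $RANDOM pick at a time.

RANDOM=0    # invalid.sh seeds RANDOM so the strings are reproducible

gen_random_s() {
    local length=$1 ll code string=
    local -a chars=($(seq 32 127))     # same table the trace builds
    for ((ll = 0; ll < length; ll++)); do
        code=${chars[RANDOM % ${#chars[@]}]}
        # decimal -> hex -> character, mirroring printf %x + echo -e:
        string+=$(echo -e "\\x$(printf %x "$code")")
    done
    # The trace checks the first character against '-' ([[ f == \- ]]);
    # the remedy below is a guess, only the check itself is shown.
    [[ ${string:0:1} == - ]] && string=_${string:1}
    echo "$string"
}

s=$(gen_random_s 21)   # the traced run produced: fy2UO:c"a"j7N@~6_OZ`g
printf '%s\n' "$s"
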
12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:33.650 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:33.651 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:33.651 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.651 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.651 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:33.651 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:33.651 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:33.651 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.651 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.651 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:33.651 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:33.651 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:33.651 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.651 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.651 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:33.651 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x5b'
[xtrace condensed: target/invalid.sh@24-25 repeat the same steps for every byte -- (( ll++ )), (( ll < length )), printf %x <code>, echo -e '\x<hex>', string+=<char> -- appending [ i d g d \ i t q T 8 p ! ( c 1 x O u Z 3 > 3 ` h 7 w o 7 & ! in turn; the trace resumes below for the final characters of the generated name]
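For reference, the loop condensed above (and finishing just below) builds the name one byte at a time with a printf %x / echo -e round-trip. A minimal sketch of that technique; the gen_random_string name and character range are illustrative, not the actual invalid.sh source:

    # Illustrative helper, not from invalid.sh: append $1 random printable
    # characters to a string, one byte at a time, using the same
    # printf %x / echo -e round-trip the xtrace shows.
    gen_random_string() {
        local length=$1 string='' ll code hex
        for (( ll = 0; ll < length; ll++ )); do
            code=$(( (RANDOM % 94) + 33 ))   # printable ASCII, 33..126
            hex=$(printf %x "$code")         # e.g. 105 -> 69
            string+=$(echo -e "\x$hex")      # e.g. \x69 -> i
        done
        [[ ${string:0:1} == "-" ]] && string="+${string:1}"  # a leading '-' would parse as a flag
        echo "$string"
    }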
00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ $ == \- ]] 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '$}I&9f[idgd\itqT8p!(c1xOuZ3>3`h7wo7&!PAt' 00:13:33.914 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '$}I&9f[idgd\itqT8p!(c1xOuZ3>3`h7wo7&!PAt' nqn.2016-06.io.spdk:cnode13526 00:13:34.172 [2024-11-20 12:27:39.811288] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13526: invalid model number '$}I&9f[idgd\itqT8p!(c1xOuZ3>3`h7wo7&!PAt' 00:13:34.172 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:34.172 { 00:13:34.172 "nqn": "nqn.2016-06.io.spdk:cnode13526", 00:13:34.172 "model_number": "$}I&9f[idgd\\itqT8p!(c1xOuZ3>3`h7wo7&!PA\u007ft", 00:13:34.172 "method": "nvmf_create_subsystem", 00:13:34.172 "req_id": 1 00:13:34.172 } 00:13:34.172 Got JSON-RPC error response 00:13:34.172 response: 00:13:34.172 { 00:13:34.172 "code": -32602, 00:13:34.172 "message": "Invalid MN 
$}I&9f[idgd\\itqT8p!(c1xOuZ3>3`h7wo7&!PA\u007ft" 00:13:34.172 }' 00:13:34.172 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:34.172 { 00:13:34.172 "nqn": "nqn.2016-06.io.spdk:cnode13526", 00:13:34.172 "model_number": "$}I&9f[idgd\\itqT8p!(c1xOuZ3>3`h7wo7&!PA\u007ft", 00:13:34.172 "method": "nvmf_create_subsystem", 00:13:34.172 "req_id": 1 00:13:34.172 } 00:13:34.172 Got JSON-RPC error response 00:13:34.172 response: 00:13:34.172 { 00:13:34.172 "code": -32602, 00:13:34.172 "message": "Invalid MN $}I&9f[idgd\\itqT8p!(c1xOuZ3>3`h7wo7&!PA\u007ft" 00:13:34.172 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:34.172 12:27:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:13:34.429 [2024-11-20 12:27:40.176960] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20d76b0/0x20dbba0) succeed. 00:13:34.429 [2024-11-20 12:27:40.191720] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20d8d40/0x211d240) succeed. 00:13:34.688 12:27:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:35.254 12:27:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:13:35.254 12:27:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:13:35.254 192.168.100.9' 00:13:35.254 12:27:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:35.254 12:27:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:13:35.254 12:27:40 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:13:35.512 [2024-11-20 12:27:41.043980] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:35.512 12:27:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:35.512 { 00:13:35.512 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:35.512 "listen_address": { 00:13:35.512 "trtype": "rdma", 00:13:35.512 "traddr": "192.168.100.8", 00:13:35.512 "trsvcid": "4421" 00:13:35.512 }, 00:13:35.512 "method": "nvmf_subsystem_remove_listener", 00:13:35.512 "req_id": 1 00:13:35.512 } 00:13:35.512 Got JSON-RPC error response 00:13:35.512 response: 00:13:35.512 { 00:13:35.512 "code": -32602, 00:13:35.512 "message": "Invalid parameters" 00:13:35.512 }' 00:13:35.512 12:27:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:35.512 { 00:13:35.512 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:35.512 "listen_address": { 00:13:35.512 "trtype": "rdma", 00:13:35.512 "traddr": "192.168.100.8", 00:13:35.512 "trsvcid": "4421" 00:13:35.512 }, 00:13:35.512 "method": "nvmf_subsystem_remove_listener", 00:13:35.512 "req_id": 1 00:13:35.512 } 00:13:35.512 Got JSON-RPC error response 00:13:35.512 response: 00:13:35.512 { 00:13:35.512 "code": -32602, 00:13:35.512 "message": "Invalid parameters" 00:13:35.512 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:35.512 12:27:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode24515 -i 0 00:13:35.771 [2024-11-20 12:27:41.381144] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24515: invalid cntlid range [0-65519] 00:13:35.771 12:27:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:35.771 { 00:13:35.771 "nqn": "nqn.2016-06.io.spdk:cnode24515", 00:13:35.771 "min_cntlid": 0, 00:13:35.771 "method": "nvmf_create_subsystem", 00:13:35.771 "req_id": 1 00:13:35.771 } 00:13:35.771 Got JSON-RPC error response 00:13:35.771 response: 00:13:35.771 { 00:13:35.771 "code": -32602, 00:13:35.771 "message": "Invalid cntlid range [0-65519]" 00:13:35.771 }' 00:13:35.771 12:27:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:35.771 { 00:13:35.771 "nqn": "nqn.2016-06.io.spdk:cnode24515", 00:13:35.771 "min_cntlid": 0, 00:13:35.771 "method": "nvmf_create_subsystem", 00:13:35.771 "req_id": 1 00:13:35.771 } 00:13:35.771 Got JSON-RPC error response 00:13:35.771 response: 00:13:35.771 { 00:13:35.771 "code": -32602, 00:13:35.771 "message": "Invalid cntlid range [0-65519]" 00:13:35.771 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:35.771 12:27:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6565 -i 65520 00:13:36.029 [2024-11-20 12:27:41.718342] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6565: invalid cntlid range [65520-65519] 00:13:36.029 12:27:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:36.029 { 00:13:36.029 "nqn": "nqn.2016-06.io.spdk:cnode6565", 00:13:36.029 "min_cntlid": 65520, 00:13:36.029 "method": "nvmf_create_subsystem", 00:13:36.029 "req_id": 1 00:13:36.029 } 00:13:36.029 Got JSON-RPC error response 00:13:36.029 response: 00:13:36.029 { 00:13:36.029 "code": -32602, 00:13:36.029 "message": "Invalid cntlid range [65520-65519]" 00:13:36.029 }' 00:13:36.029 12:27:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:36.029 { 00:13:36.029 "nqn": "nqn.2016-06.io.spdk:cnode6565", 00:13:36.029 "min_cntlid": 65520, 00:13:36.029 "method": "nvmf_create_subsystem", 00:13:36.029 "req_id": 1 00:13:36.029 } 00:13:36.029 Got JSON-RPC error response 00:13:36.029 response: 00:13:36.029 { 00:13:36.029 "code": -32602, 00:13:36.029 "message": "Invalid cntlid range [65520-65519]" 00:13:36.029 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:36.029 12:27:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12355 -I 0 00:13:36.597 [2024-11-20 12:27:42.055564] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12355: invalid cntlid range [1-0] 00:13:36.597 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:36.597 { 00:13:36.597 "nqn": "nqn.2016-06.io.spdk:cnode12355", 00:13:36.597 "max_cntlid": 0, 00:13:36.597 "method": "nvmf_create_subsystem", 00:13:36.597 "req_id": 1 00:13:36.597 } 00:13:36.597 Got JSON-RPC error response 00:13:36.597 response: 00:13:36.597 { 00:13:36.597 "code": -32602, 00:13:36.597 "message": "Invalid cntlid range [1-0]" 00:13:36.597 }' 00:13:36.597 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:36.597 { 
00:13:36.597 "nqn": "nqn.2016-06.io.spdk:cnode12355", 00:13:36.597 "max_cntlid": 0, 00:13:36.597 "method": "nvmf_create_subsystem", 00:13:36.597 "req_id": 1 00:13:36.597 } 00:13:36.597 Got JSON-RPC error response 00:13:36.597 response: 00:13:36.597 { 00:13:36.597 "code": -32602, 00:13:36.597 "message": "Invalid cntlid range [1-0]" 00:13:36.597 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:36.597 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1976 -I 65520 00:13:36.855 [2024-11-20 12:27:42.392805] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1976: invalid cntlid range [1-65520] 00:13:36.855 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:36.855 { 00:13:36.855 "nqn": "nqn.2016-06.io.spdk:cnode1976", 00:13:36.855 "max_cntlid": 65520, 00:13:36.855 "method": "nvmf_create_subsystem", 00:13:36.855 "req_id": 1 00:13:36.855 } 00:13:36.855 Got JSON-RPC error response 00:13:36.855 response: 00:13:36.855 { 00:13:36.855 "code": -32602, 00:13:36.855 "message": "Invalid cntlid range [1-65520]" 00:13:36.855 }' 00:13:36.855 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:36.855 { 00:13:36.855 "nqn": "nqn.2016-06.io.spdk:cnode1976", 00:13:36.855 "max_cntlid": 65520, 00:13:36.855 "method": "nvmf_create_subsystem", 00:13:36.855 "req_id": 1 00:13:36.855 } 00:13:36.855 Got JSON-RPC error response 00:13:36.855 response: 00:13:36.855 { 00:13:36.855 "code": -32602, 00:13:36.855 "message": "Invalid cntlid range [1-65520]" 00:13:36.855 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:36.855 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10598 -i 6 -I 5 00:13:37.113 [2024-11-20 12:27:42.717951] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10598: invalid cntlid range [6-5] 00:13:37.113 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:37.113 { 00:13:37.113 "nqn": "nqn.2016-06.io.spdk:cnode10598", 00:13:37.113 "min_cntlid": 6, 00:13:37.113 "max_cntlid": 5, 00:13:37.113 "method": "nvmf_create_subsystem", 00:13:37.113 "req_id": 1 00:13:37.113 } 00:13:37.113 Got JSON-RPC error response 00:13:37.113 response: 00:13:37.113 { 00:13:37.113 "code": -32602, 00:13:37.113 "message": "Invalid cntlid range [6-5]" 00:13:37.113 }' 00:13:37.113 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:37.113 { 00:13:37.113 "nqn": "nqn.2016-06.io.spdk:cnode10598", 00:13:37.113 "min_cntlid": 6, 00:13:37.113 "max_cntlid": 5, 00:13:37.113 "method": "nvmf_create_subsystem", 00:13:37.113 "req_id": 1 00:13:37.113 } 00:13:37.113 Got JSON-RPC error response 00:13:37.113 response: 00:13:37.113 { 00:13:37.113 "code": -32602, 00:13:37.113 "message": "Invalid cntlid range [6-5]" 00:13:37.113 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:37.113 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:37.371 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:37.371 { 00:13:37.371 "name": 
"foobar", 00:13:37.371 "method": "nvmf_delete_target", 00:13:37.371 "req_id": 1 00:13:37.371 } 00:13:37.371 Got JSON-RPC error response 00:13:37.371 response: 00:13:37.371 { 00:13:37.371 "code": -32602, 00:13:37.371 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:37.371 }' 00:13:37.371 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:37.372 { 00:13:37.372 "name": "foobar", 00:13:37.372 "method": "nvmf_delete_target", 00:13:37.372 "req_id": 1 00:13:37.372 } 00:13:37.372 Got JSON-RPC error response 00:13:37.372 response: 00:13:37.372 { 00:13:37.372 "code": -32602, 00:13:37.372 "message": "The specified target doesn't exist, cannot delete it." 00:13:37.372 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:37.372 rmmod nvme_rdma 00:13:37.372 rmmod nvme_fabrics 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2753393 ']' 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2753393 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2753393 ']' 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2753393 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2753393 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2753393' 00:13:37.372 killing process with pid 2753393 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 
2753393 00:13:37.372 12:27:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2753393 00:13:37.631 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:37.631 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:37.631 00:13:37.631 real 0m8.446s 00:13:37.631 user 0m26.736s 00:13:37.631 sys 0m2.912s 00:13:37.631 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.631 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:37.631 ************************************ 00:13:37.631 END TEST nvmf_invalid 00:13:37.631 ************************************ 00:13:37.631 12:27:43 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:13:37.631 12:27:43 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:37.631 12:27:43 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.631 12:27:43 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:37.631 ************************************ 00:13:37.631 START TEST nvmf_connect_stress 00:13:37.631 ************************************ 00:13:37.631 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:13:37.631 * Looking for test storage... 00:13:37.631 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:37.631 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:37.631 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:13:37.631 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:37.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.891 --rc genhtml_branch_coverage=1 00:13:37.891 --rc genhtml_function_coverage=1 00:13:37.891 --rc genhtml_legend=1 00:13:37.891 --rc geninfo_all_blocks=1 00:13:37.891 --rc geninfo_unexecuted_blocks=1 00:13:37.891 00:13:37.891 ' 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:37.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.891 --rc genhtml_branch_coverage=1 00:13:37.891 --rc genhtml_function_coverage=1 00:13:37.891 --rc genhtml_legend=1 00:13:37.891 --rc geninfo_all_blocks=1 00:13:37.891 --rc geninfo_unexecuted_blocks=1 00:13:37.891 00:13:37.891 ' 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:37.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.891 --rc genhtml_branch_coverage=1 00:13:37.891 --rc genhtml_function_coverage=1 00:13:37.891 --rc genhtml_legend=1 00:13:37.891 --rc geninfo_all_blocks=1 00:13:37.891 --rc geninfo_unexecuted_blocks=1 00:13:37.891 00:13:37.891 ' 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:13:37.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.891 --rc genhtml_branch_coverage=1 00:13:37.891 --rc genhtml_function_coverage=1 00:13:37.891 --rc genhtml_legend=1 00:13:37.891 --rc geninfo_all_blocks=1 00:13:37.891 --rc geninfo_unexecuted_blocks=1 00:13:37.891 00:13:37.891 ' 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[xtrace condensed: the same three toolchain directories repeat several more times ahead of the standard system PATH ending /var/lib/snapd/snap/bin; paths/export.sh@3 and @4 then re-prepend /opt/go/1.21.1/bin and /opt/protoc/21.7/bin ahead of the same value, so each assignment echoes the full accumulated list again] 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo [accumulated PATH value, condensed as above] 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.891 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.892 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:37.892 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:37.892 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:37.892 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:37.892 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:37.892 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:37.892 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:37.892 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.892 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:37.892 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:37.892 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:37.892 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.892 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:37.892 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.892 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:37.892 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:37.892 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:37.892 12:27:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:13:40.427 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:13:40.427 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:13:40.427 Found net devices under 0000:83:00.0: mlx_0_0 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:13:40.427 Found net devices under 0000:83:00.1: mlx_0_1 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.427 12:27:45 
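The two "Found net devices under ..." lines above come from mapping each Mellanox PCI function to its kernel netdev through sysfs. A condensed sketch of that lookup, reusing the PCI address discovered above (array names as in nvmf/common.sh; the address is this rig's, not a general default):

    # List the net device(s) a PCI function exposes: glob the sysfs net/
    # directory, then strip the path down to the interface name.
    pci=0000:83:00.0                            # taken from the discovery above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")     # e.g. mlx_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"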
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:13:40.427 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:40.428 
12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:40.428 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:40.428 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:13:40.428 altname enp131s0f0np0 00:13:40.428 inet 192.168.100.8/24 scope global mlx_0_0 00:13:40.428 valid_lft forever preferred_lft forever 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:40.428 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:40.428 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:13:40.428 altname enp131s0f1np1 00:13:40.428 inet 192.168.100.9/24 scope global mlx_0_1 00:13:40.428 valid_lft forever preferred_lft forever 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:40.428 
12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in 
$(get_rdma_if_list) 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:40.428 192.168.100.9' 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:40.428 192.168.100.9' 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:40.428 192.168.100.9' 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.428 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2755406 00:13:40.429 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:40.429 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2755406 00:13:40.429 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2755406 ']' 00:13:40.429 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.429 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:13:40.429 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.429 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:40.429 12:27:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.429 [2024-11-20 12:27:45.947830] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:13:40.429 [2024-11-20 12:27:45.948000] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.429 [2024-11-20 12:27:46.027949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:40.429 [2024-11-20 12:27:46.089812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.429 [2024-11-20 12:27:46.089865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.429 [2024-11-20 12:27:46.089880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.429 [2024-11-20 12:27:46.089892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.429 [2024-11-20 12:27:46.089904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.429 [2024-11-20 12:27:46.091062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.429 [2024-11-20 12:27:46.091117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.429 [2024-11-20 12:27:46.091137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.687 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:40.687 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:40.687 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:40.687 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:40.687 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.687 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.687 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:40.687 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.687 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.687 [2024-11-20 12:27:46.269059] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a5b590/0x1a5fa80) succeed. 00:13:40.687 [2024-11-20 12:27:46.284091] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a5cb80/0x1aa1120) succeed. 
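
nvmfappstart above launches build/bin/nvmf_tgt with -i 0 -e 0xFFFF -m 0xE, waits for it to listen on /var/tmp/spdk.sock, then rpc_cmd creates the RDMA transport; the two create_ib_device notices confirm both mlx5 ports registered. A hedged sketch of the same bring-up using scripts/rpc.py directly (waitforlisten is simplified to a sleep here; the real helper polls the socket with a retry budget):

# Start the target with the exact flags from the trace.
/path/to/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
sleep 2    # stand-in for waitforlisten's polling of /var/tmp/spdk.sock

# Create the RDMA transport with the options from the trace.
/path/to/spdk/scripts/rpc.py nvmf_create_transport -t rdma \
    --num-shared-buffers 1024 -u 8192
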
00:13:40.687 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.687 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:40.687 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.687 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.687 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.687 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:40.687 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.687 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.687 [2024-11-20 12:27:46.439704] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:40.687 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.687 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:40.687 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.688 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.946 NULL1 00:13:40.946 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2755459 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.947 12:27:46 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.947 12:27:46 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.947 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.206 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.206 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:41.206 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.206 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.206 12:27:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.464 12:27:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.464 12:27:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:41.464 12:27:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.464 12:27:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.464 12:27:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.031 12:27:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.031 12:27:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:42.031 12:27:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.031 12:27:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.031 12:27:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.289 12:27:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.289 12:27:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:42.289 12:27:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.289 12:27:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.289 12:27:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.547 12:27:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.547 12:27:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 
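
From here to the "No such process" line further below, connect_stress.sh is in its steady state: it built the target side (subsystem cnode1, an RDMA listener on 192.168.100.8:4420, a NULL1 null bdev), filled rpc.txt with twenty batched commands via the seq 1 20 / cat loop, launched the connect_stress stressor, and now loops on kill -0 $PERF_PID while the stressor is alive. A condensed sketch of that control flow; feeding rpc.txt to rpc.py on stdin is an assumption about what rpc_cmd does with the batch, not confirmed by the trace:

rpc_py=/path/to/spdk/scripts/rpc.py

# Target-side objects, as created in the trace above.
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc_py bdev_null_create NULL1 1000 512

# Ten-second stressor on core 0 against the listener (connect_stress.sh@20).
test/nvme/connect_stress/connect_stress -c 0x1 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -t 10 &
PERF_PID=$!

# Keep the control plane busy while the stressor runs (sh@34-35 above).
while kill -0 "$PERF_PID" 2>/dev/null; do
    $rpc_py < rpc.txt          # assumption: rpc.txt batches one RPC per line
done
wait "$PERF_PID"
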
00:13:42.547 12:27:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.547 12:27:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.547 12:27:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.805 12:27:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.805 12:27:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:42.805 12:27:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.805 12:27:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.805 12:27:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.063 12:27:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.063 12:27:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:43.063 12:27:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.063 12:27:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.063 12:27:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.631 12:27:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.631 12:27:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:43.631 12:27:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.631 12:27:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.631 12:27:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.889 12:27:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.889 12:27:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:43.889 12:27:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.889 12:27:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.889 12:27:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.147 12:27:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.147 12:27:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:44.147 12:27:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.147 12:27:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.147 12:27:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.406 12:27:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.406 12:27:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2755459 00:13:44.406 12:27:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.406 12:27:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.406 12:27:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.972 12:27:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.972 12:27:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:44.972 12:27:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.972 12:27:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.972 12:27:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.230 12:27:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.230 12:27:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:45.230 12:27:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.230 12:27:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.230 12:27:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.488 12:27:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.488 12:27:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:45.488 12:27:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.488 12:27:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.488 12:27:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.746 12:27:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.746 12:27:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:45.746 12:27:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.746 12:27:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.746 12:27:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.312 12:27:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.312 12:27:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:46.312 12:27:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.312 12:27:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.312 12:27:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.570 12:27:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.570 12:27:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 2755459 00:13:46.570 12:27:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.570 12:27:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.570 12:27:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.828 12:27:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.828 12:27:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:46.828 12:27:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.828 12:27:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.828 12:27:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.086 12:27:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.086 12:27:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:47.086 12:27:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.086 12:27:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.086 12:27:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.344 12:27:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.344 12:27:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:47.344 12:27:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.344 12:27:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.344 12:27:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.910 12:27:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.910 12:27:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:47.910 12:27:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.910 12:27:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.910 12:27:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.169 12:27:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.169 12:27:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:48.169 12:27:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.169 12:27:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.169 12:27:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.427 12:27:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.427 12:27:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 2755459 00:13:48.427 12:27:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.427 12:27:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.427 12:27:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.686 12:27:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.686 12:27:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:48.686 12:27:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.686 12:27:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.686 12:27:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.253 12:27:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.253 12:27:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:49.253 12:27:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.253 12:27:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.253 12:27:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.511 12:27:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.511 12:27:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:49.511 12:27:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.511 12:27:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.511 12:27:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.769 12:27:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.769 12:27:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:49.769 12:27:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.769 12:27:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.769 12:27:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.027 12:27:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.027 12:27:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:50.027 12:27:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.027 12:27:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.027 12:27:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.287 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.287 12:27:56 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:50.287 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.287 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.287 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.853 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.853 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:50.853 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.853 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.853 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.853 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2755459 00:13:51.111 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2755459) - No such process 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2755459 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:51.111 rmmod nvme_rdma 00:13:51.111 rmmod nvme_fabrics 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2755406 ']' 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2755406 00:13:51.111 12:27:56 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2755406 ']' 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2755406 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2755406 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:51.111 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2755406' 00:13:51.111 killing process with pid 2755406 00:13:51.112 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2755406 00:13:51.112 12:27:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2755406 00:13:51.372 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:51.372 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:51.372 00:13:51.372 real 0m13.823s 00:13:51.372 user 0m39.213s 00:13:51.372 sys 0m3.776s 00:13:51.372 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.372 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.372 ************************************ 00:13:51.372 END TEST nvmf_connect_stress 00:13:51.372 ************************************ 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:51.634 ************************************ 00:13:51.634 START TEST nvmf_fused_ordering 00:13:51.634 ************************************ 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:13:51.634 * Looking for test storage... 
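
The connect_stress teardown traced just above (before nvmf_fused_ordering starts) is nvmftestfini: sync, unload nvme-rdma (hence the rmmod nvme_rdma / rmmod nvme_fabrics lines), then killprocess on the nvmf_tgt pid, which checks the pid is alive and inspects its comm name (reactor_1 in this run) so it never signals a sudo wrapper by accident. A reduced sketch of that kill path; bailing out on a sudo comm name is a simplification of the harness's actual handling:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0            # already gone
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 here
        [ "$process_name" = sudo ] && return 1            # simplification
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}

sync
modprobe -v -r nvme-rdma
modprobe -v -r nvme-fabrics
killprocess "$nvmfpid"
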
00:13:51.634 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:51.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.634 --rc genhtml_branch_coverage=1 00:13:51.634 --rc genhtml_function_coverage=1 00:13:51.634 --rc genhtml_legend=1 00:13:51.634 --rc geninfo_all_blocks=1 00:13:51.634 --rc geninfo_unexecuted_blocks=1 00:13:51.634 00:13:51.634 ' 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:51.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.634 --rc genhtml_branch_coverage=1 00:13:51.634 --rc genhtml_function_coverage=1 00:13:51.634 --rc genhtml_legend=1 00:13:51.634 --rc geninfo_all_blocks=1 00:13:51.634 --rc geninfo_unexecuted_blocks=1 00:13:51.634 00:13:51.634 ' 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:51.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.634 --rc genhtml_branch_coverage=1 00:13:51.634 --rc genhtml_function_coverage=1 00:13:51.634 --rc genhtml_legend=1 00:13:51.634 --rc geninfo_all_blocks=1 00:13:51.634 --rc geninfo_unexecuted_blocks=1 00:13:51.634 00:13:51.634 ' 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:51.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.634 --rc genhtml_branch_coverage=1 00:13:51.634 --rc genhtml_function_coverage=1 00:13:51.634 --rc genhtml_legend=1 00:13:51.634 --rc geninfo_all_blocks=1 00:13:51.634 --rc geninfo_unexecuted_blocks=1 00:13:51.634 00:13:51.634 ' 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.634 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:51.635 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:51.635 12:27:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
local -ga x722 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:13:54.250 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:13:54.250 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:13:54.250 Found net devices under 0000:83:00.0: mlx_0_0 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:13:54.250 Found net devices under 0000:83:00.1: mlx_0_1 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.250 12:27:59 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:54.250 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:54.251 
12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:54.251 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:54.251 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:13:54.251 altname enp131s0f0np0 00:13:54.251 inet 192.168.100.8/24 scope global mlx_0_0 00:13:54.251 valid_lft forever preferred_lft forever 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:54.251 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:54.251 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:13:54.251 altname enp131s0f1np1 00:13:54.251 inet 192.168.100.9/24 scope global mlx_0_1 00:13:54.251 valid_lft forever preferred_lft forever 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:54.251 
12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in 
$(get_rdma_if_list) 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:54.251 192.168.100.9' 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:54.251 192.168.100.9' 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:54.251 192.168.100.9' 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:54.251 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:54.252 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:54.252 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.252 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2757732 00:13:54.252 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:54.252 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2757732 00:13:54.252 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2757732 ']' 00:13:54.252 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.252 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:13:54.252 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.252 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.252 12:27:59 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.252 [2024-11-20 12:27:59.789196] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:13:54.252 [2024-11-20 12:27:59.789287] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.252 [2024-11-20 12:27:59.904313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.252 [2024-11-20 12:28:00.011342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.252 [2024-11-20 12:28:00.011436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.252 [2024-11-20 12:28:00.011471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.252 [2024-11-20 12:28:00.011518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.252 [2024-11-20 12:28:00.011553] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:54.252 [2024-11-20 12:28:00.012513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.512 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:54.512 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:54.512 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:54.512 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:54.512 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.512 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.512 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:54.512 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.512 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.771 [2024-11-20 12:28:00.280725] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7a5cb0/0x7aa1a0) succeed. 00:13:54.771 [2024-11-20 12:28:00.309519] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7a7160/0x7eb840) succeed. 
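The nvmfappstart/waitforlisten exchange traced above reduces to a simple pattern: launch nvmf_tgt in the background, then poll its RPC socket until the app answers. A minimal bash sketch of that pattern, assuming the workspace paths from this log and using the stock rpc_get_methods RPC as the liveness probe (the traced helper additionally registers the nvmftestfini trap, omitted here):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    SOCK=/var/tmp/spdk.sock

    # Launch the target as the trace shows: shm id 0, all tracepoint
    # groups enabled (0xFFFF), reactor pinned to core 1 via mask 0x2.
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # waitforlisten equivalent: poll until the RPC socket answers.
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods &>/dev/null && break
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died
        sleep 0.1
    done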
00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.771 [2024-11-20 12:28:00.405160] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.771 NULL1 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.771 12:28:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:54.771 [2024-11-20 12:28:00.462615] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
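Everything the fused_ordering app attaches to was built through the rpc_cmd calls traced above. The same sequence, replayed directly with SPDK's rpc.py (arguments copied verbatim from the trace; the socket path is assumed to be the default /var/tmp/spdk.sock):

    RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $RPC bdev_null_create NULL1 1000 512    # 1000 MiB null bdev, 512-byte blocks
    $RPC bdev_wait_for_examine
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The -r string passed to the fused_ordering app ('trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1') matches the listener created here, which is why the tool reports "Attached to nqn.2016-06.io.spdk:cnode1" with the 1GB namespace below.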
00:13:54.771 [2024-11-20 12:28:00.462675] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2757759 ] 00:13:55.030 Attached to nqn.2016-06.io.spdk:cnode1 00:13:55.030 Namespace ID: 1 size: 1GB 00:13:55.030 fused_ordering(0) 00:13:55.030 fused_ordering(1) 00:13:55.030 fused_ordering(2) 00:13:55.030 fused_ordering(3) 00:13:55.030 fused_ordering(4) 00:13:55.030 fused_ordering(5) 00:13:55.030 fused_ordering(6) 00:13:55.030 fused_ordering(7) 00:13:55.030 fused_ordering(8) 00:13:55.030 fused_ordering(9) 00:13:55.030 fused_ordering(10) 00:13:55.030 fused_ordering(11) 00:13:55.030 fused_ordering(12) 00:13:55.030 fused_ordering(13) 00:13:55.030 fused_ordering(14) 00:13:55.030 fused_ordering(15) 00:13:55.030 fused_ordering(16) 00:13:55.030 fused_ordering(17) 00:13:55.030 fused_ordering(18) 00:13:55.030 fused_ordering(19) 00:13:55.030 fused_ordering(20) 00:13:55.030 fused_ordering(21) 00:13:55.030 fused_ordering(22) 00:13:55.030 fused_ordering(23) 00:13:55.030 fused_ordering(24) 00:13:55.030 fused_ordering(25) 00:13:55.030 fused_ordering(26) 00:13:55.030 fused_ordering(27) 00:13:55.030 fused_ordering(28) 00:13:55.030 fused_ordering(29) 00:13:55.030 fused_ordering(30) 00:13:55.030 fused_ordering(31) 00:13:55.030 fused_ordering(32) 00:13:55.030 fused_ordering(33) 00:13:55.030 fused_ordering(34) 00:13:55.030 fused_ordering(35) 00:13:55.030 fused_ordering(36) 00:13:55.030 fused_ordering(37) 00:13:55.030 fused_ordering(38) 00:13:55.030 fused_ordering(39) 00:13:55.030 fused_ordering(40) 00:13:55.030 fused_ordering(41) 00:13:55.030 fused_ordering(42) 00:13:55.030 fused_ordering(43) 00:13:55.030 fused_ordering(44) 00:13:55.030 fused_ordering(45) 00:13:55.030 fused_ordering(46) 00:13:55.030 fused_ordering(47) 00:13:55.030 fused_ordering(48) 00:13:55.030 fused_ordering(49) 00:13:55.030 fused_ordering(50) 00:13:55.030 fused_ordering(51) 00:13:55.030 fused_ordering(52) 00:13:55.030 fused_ordering(53) 00:13:55.030 fused_ordering(54) 00:13:55.030 fused_ordering(55) 00:13:55.030 fused_ordering(56) 00:13:55.030 fused_ordering(57) 00:13:55.030 fused_ordering(58) 00:13:55.030 fused_ordering(59) 00:13:55.030 fused_ordering(60) 00:13:55.030 fused_ordering(61) 00:13:55.030 fused_ordering(62) 00:13:55.030 fused_ordering(63) 00:13:55.030 fused_ordering(64) 00:13:55.030 fused_ordering(65) 00:13:55.030 fused_ordering(66) 00:13:55.030 fused_ordering(67) 00:13:55.030 fused_ordering(68) 00:13:55.030 fused_ordering(69) 00:13:55.030 fused_ordering(70) 00:13:55.030 fused_ordering(71) 00:13:55.030 fused_ordering(72) 00:13:55.030 fused_ordering(73) 00:13:55.030 fused_ordering(74) 00:13:55.030 fused_ordering(75) 00:13:55.030 fused_ordering(76) 00:13:55.030 fused_ordering(77) 00:13:55.030 fused_ordering(78) 00:13:55.030 fused_ordering(79) 00:13:55.030 fused_ordering(80) 00:13:55.030 fused_ordering(81) 00:13:55.030 fused_ordering(82) 00:13:55.030 fused_ordering(83) 00:13:55.030 fused_ordering(84) 00:13:55.030 fused_ordering(85) 00:13:55.030 fused_ordering(86) 00:13:55.030 fused_ordering(87) 00:13:55.030 fused_ordering(88) 00:13:55.030 fused_ordering(89) 00:13:55.030 fused_ordering(90) 00:13:55.030 fused_ordering(91) 00:13:55.030 fused_ordering(92) 00:13:55.030 fused_ordering(93) 00:13:55.030 fused_ordering(94) 00:13:55.030 fused_ordering(95) 00:13:55.030 fused_ordering(96) 00:13:55.030 fused_ordering(97) 00:13:55.030 fused_ordering(98) 
00:13:55.030 fused_ordering(99) ... fused_ordering(958) 00:13:56.069 [counters 99 through 958 continue in unbroken ascending order; timestamps advance from 00:13:55.030 to 00:13:56.069]
00:13:56.069 fused_ordering(959) 00:13:56.069 fused_ordering(960) 00:13:56.069 fused_ordering(961) 00:13:56.069 fused_ordering(962) 00:13:56.069 fused_ordering(963) 00:13:56.069 fused_ordering(964) 00:13:56.069 fused_ordering(965) 00:13:56.069 fused_ordering(966) 00:13:56.069 fused_ordering(967) 00:13:56.069 fused_ordering(968) 00:13:56.069 fused_ordering(969) 00:13:56.069 fused_ordering(970) 00:13:56.069 fused_ordering(971) 00:13:56.069 fused_ordering(972) 00:13:56.069 fused_ordering(973) 00:13:56.069 fused_ordering(974) 00:13:56.069 fused_ordering(975) 00:13:56.069 fused_ordering(976) 00:13:56.069 fused_ordering(977) 00:13:56.069 fused_ordering(978) 00:13:56.069 fused_ordering(979) 00:13:56.069 fused_ordering(980) 00:13:56.069 fused_ordering(981) 00:13:56.069 fused_ordering(982) 00:13:56.069 fused_ordering(983) 00:13:56.069 fused_ordering(984) 00:13:56.069 fused_ordering(985) 00:13:56.070 fused_ordering(986) 00:13:56.070 fused_ordering(987) 00:13:56.070 fused_ordering(988) 00:13:56.070 fused_ordering(989) 00:13:56.070 fused_ordering(990) 00:13:56.070 fused_ordering(991) 00:13:56.070 fused_ordering(992) 00:13:56.070 fused_ordering(993) 00:13:56.070 fused_ordering(994) 00:13:56.070 fused_ordering(995) 00:13:56.070 fused_ordering(996) 00:13:56.070 fused_ordering(997) 00:13:56.070 fused_ordering(998) 00:13:56.070 fused_ordering(999) 00:13:56.070 fused_ordering(1000) 00:13:56.070 fused_ordering(1001) 00:13:56.070 fused_ordering(1002) 00:13:56.070 fused_ordering(1003) 00:13:56.070 fused_ordering(1004) 00:13:56.070 fused_ordering(1005) 00:13:56.070 fused_ordering(1006) 00:13:56.070 fused_ordering(1007) 00:13:56.070 fused_ordering(1008) 00:13:56.070 fused_ordering(1009) 00:13:56.070 fused_ordering(1010) 00:13:56.070 fused_ordering(1011) 00:13:56.070 fused_ordering(1012) 00:13:56.070 fused_ordering(1013) 00:13:56.070 fused_ordering(1014) 00:13:56.070 fused_ordering(1015) 00:13:56.070 fused_ordering(1016) 00:13:56.070 fused_ordering(1017) 00:13:56.070 fused_ordering(1018) 00:13:56.070 fused_ordering(1019) 00:13:56.070 fused_ordering(1020) 00:13:56.070 fused_ordering(1021) 00:13:56.070 fused_ordering(1022) 00:13:56.070 fused_ordering(1023) 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:56.070 rmmod nvme_rdma 00:13:56.070 rmmod nvme_fabrics 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:56.070 12:28:01 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2757732 ']' 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2757732 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2757732 ']' 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2757732 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2757732 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2757732' 00:13:56.070 killing process with pid 2757732 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2757732 00:13:56.070 12:28:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2757732 00:13:56.639 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:56.639 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:56.639 00:13:56.639 real 0m5.092s 00:13:56.639 user 0m4.564s 00:13:56.639 sys 0m2.294s 00:13:56.639 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.639 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.639 ************************************ 00:13:56.639 END TEST nvmf_fused_ordering 00:13:56.639 ************************************ 00:13:56.639 12:28:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:13:56.639 12:28:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:56.639 12:28:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.639 12:28:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:56.639 ************************************ 00:13:56.639 START TEST nvmf_ns_masking 00:13:56.639 ************************************ 00:13:56.639 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:13:56.639 * Looking for test storage... 
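[editor's note] For readers tracing the teardown above: killprocess in autotest_common.sh does not signal blindly. The kill -0 / uname / ps --no-headers probes in the trace correspond to a liveness check and a process-name guard before the actual kill. A condensed sketch of that flow, simplified from what the trace shows (the real helper carries extra retry and privilege handling this omits):

# killprocess: liveness check, name guard, then kill and reap (sketch)
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # no pid supplied
    kill -0 "$pid" 2>/dev/null || return 1    # process must still exist
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # never signal sudo itself; the target shows up as e.g. reactor_1
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                               # reap, propagate exit status
}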
00:13:56.639 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:56.639 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:56.639 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:13:56.639 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:56.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.899 --rc genhtml_branch_coverage=1 00:13:56.899 --rc genhtml_function_coverage=1 00:13:56.899 --rc genhtml_legend=1 00:13:56.899 --rc geninfo_all_blocks=1 00:13:56.899 --rc geninfo_unexecuted_blocks=1 00:13:56.899 00:13:56.899 ' 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:56.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.899 --rc genhtml_branch_coverage=1 00:13:56.899 --rc genhtml_function_coverage=1 00:13:56.899 --rc genhtml_legend=1 00:13:56.899 --rc geninfo_all_blocks=1 00:13:56.899 --rc geninfo_unexecuted_blocks=1 00:13:56.899 00:13:56.899 ' 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:56.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.899 --rc genhtml_branch_coverage=1 00:13:56.899 --rc genhtml_function_coverage=1 00:13:56.899 --rc genhtml_legend=1 00:13:56.899 --rc geninfo_all_blocks=1 00:13:56.899 --rc geninfo_unexecuted_blocks=1 00:13:56.899 00:13:56.899 ' 00:13:56.899 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:56.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.899 --rc genhtml_branch_coverage=1 00:13:56.899 --rc genhtml_function_coverage=1 00:13:56.899 --rc genhtml_legend=1 00:13:56.900 --rc geninfo_all_blocks=1 00:13:56.900 --rc geninfo_unexecuted_blocks=1 00:13:56.900 00:13:56.900 ' 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.900 12:28:02 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:56.900 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:56.900 12:28:02 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=9763b649-781d-41ed-a518-77ba2d1fb0eb 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=6afa1e83-9a06-4c11-9952-e4dabc7202bc 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a71661b0-16cc-4eb9-bcc0-1970d5d37abe 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:56.900 12:28:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:59.437 12:28:04 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:13:59.437 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:13:59.437 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:13:59.437 Found net devices under 0000:83:00.0: mlx_0_0 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:13:59.437 Found net devices under 0000:83:00.1: mlx_0_1 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:59.437 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:59.438 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:59.438 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:13:59.438 altname enp131s0f0np0 00:13:59.438 inet 192.168.100.8/24 scope global mlx_0_0 00:13:59.438 valid_lft forever preferred_lft forever 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:59.438 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:59.438 link/ether 24:8a:07:4b:f4:a1 brd 
ff:ff:ff:ff:ff:ff 00:13:59.438 altname enp131s0f1np1 00:13:59.438 inet 192.168.100.9/24 scope global mlx_0_1 00:13:59.438 valid_lft forever preferred_lft forever 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:59.438 12:28:04 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:59.438 192.168.100.9' 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:59.438 192.168.100.9' 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:59.438 192.168.100.9' 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2759244 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2759244 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2759244 ']' 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.438 12:28:04 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.438 12:28:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:59.438 [2024-11-20 12:28:04.983681] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:13:59.438 [2024-11-20 12:28:04.983769] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.438 [2024-11-20 12:28:05.054525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.438 [2024-11-20 12:28:05.115247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.438 [2024-11-20 12:28:05.115312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.438 [2024-11-20 12:28:05.115328] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.438 [2024-11-20 12:28:05.115341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.438 [2024-11-20 12:28:05.115352] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:59.438 [2024-11-20 12:28:05.115856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.698 12:28:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.698 12:28:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:59.698 12:28:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:59.698 12:28:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:59.698 12:28:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:59.698 12:28:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.698 12:28:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:59.956 [2024-11-20 12:28:05.635541] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19c49e0/0x19c8ed0) succeed. 00:13:59.956 [2024-11-20 12:28:05.649206] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19c5e90/0x1a0a570) succeed. 
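[editor's note] The prologue traced above, from nvmfappstart through nvmf_create_transport, reduces to a short command sequence. A minimal sketch assuming the repo layout this job uses; the rpc_get_methods polling loop is a stand-in for the waitforlisten helper, and error handling is omitted:

# start the target with the shm id and tracepoint mask from the trace
./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!

# poll the RPC socket until the target answers (waitforlisten, simplified)
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done

# create the RDMA transport with the buffer sizing shown in the trace
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The two create_ib_device notices directly above are the transport claiming both mlx5 ports discovered earlier (192.168.100.8 and 192.168.100.9).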
00:13:59.956 12:28:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:59.956 12:28:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:59.956 12:28:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:00.523 Malloc1 00:14:00.523 12:28:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:00.782 Malloc2 00:14:00.782 12:28:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:01.040 12:28:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:01.606 12:28:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:01.864 [2024-11-20 12:28:07.398543] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:01.864 12:28:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:01.864 12:28:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a71661b0-16cc-4eb9-bcc0-1970d5d37abe -a 192.168.100.8 -s 4420 -i 4 00:14:02.123 12:28:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:02.123 12:28:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:02.123 12:28:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:02.123 12:28:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:02.123 12:28:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:04.024 12:28:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:04.024 12:28:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:04.024 12:28:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:04.024 12:28:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:04.025 12:28:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:04.025 12:28:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:04.025 12:28:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:04.025 12:28:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme 
list-subsys -o json 00:14:04.283 12:28:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:04.283 12:28:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:04.283 12:28:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:04.283 12:28:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.283 12:28:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:04.283 [ 0]:0x1 00:14:04.283 12:28:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:04.283 12:28:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:04.283 12:28:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0d4c7d3c609c45cbbd7c1e4048c60b25 00:14:04.283 12:28:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0d4c7d3c609c45cbbd7c1e4048c60b25 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:04.283 12:28:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:04.541 12:28:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:04.541 12:28:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.541 12:28:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:04.541 [ 0]:0x1 00:14:04.541 12:28:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:04.541 12:28:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:04.541 12:28:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0d4c7d3c609c45cbbd7c1e4048c60b25 00:14:04.541 12:28:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0d4c7d3c609c45cbbd7c1e4048c60b25 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:04.541 12:28:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:04.541 12:28:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.541 12:28:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:04.541 [ 1]:0x2 00:14:04.541 12:28:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:04.541 12:28:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:04.541 12:28:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28a414f47e47410c92f9f2aab79ab3ef 00:14:04.541 12:28:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28a414f47e47410c92f9f2aab79ab3ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:04.541 12:28:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:04.541 12:28:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect 
-n nqn.2016-06.io.spdk:cnode1 00:14:05.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.107 12:28:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.366 12:28:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:05.623 12:28:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:05.623 12:28:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a71661b0-16cc-4eb9-bcc0-1970d5d37abe -a 192.168.100.8 -s 4420 -i 4 00:14:06.189 12:28:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:06.189 12:28:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:06.189 12:28:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:06.189 12:28:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:06.189 12:28:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:06.189 12:28:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:08.088 [ 0]:0x2 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28a414f47e47410c92f9f2aab79ab3ef 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28a414f47e47410c92f9f2aab79ab3ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.088 12:28:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:08.654 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:08.654 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.654 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:08.654 [ 0]:0x1 00:14:08.654 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:08.654 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.654 12:28:14 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0d4c7d3c609c45cbbd7c1e4048c60b25 00:14:08.654 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0d4c7d3c609c45cbbd7c1e4048c60b25 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.654 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:08.654 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:08.654 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.654 [ 1]:0x2 00:14:08.654 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.654 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:08.654 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28a414f47e47410c92f9f2aab79ab3ef 00:14:08.654 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28a414f47e47410c92f9f2aab79ab3ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.654 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( 
es > 128 )) 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:08.912 [ 0]:0x2 00:14:08.912 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.913 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:09.170 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28a414f47e47410c92f9f2aab79ab3ef 00:14:09.170 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28a414f47e47410c92f9f2aab79ab3ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:09.170 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:09.170 12:28:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:09.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.428 12:28:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:09.686 12:28:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:09.686 12:28:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a71661b0-16cc-4eb9-bcc0-1970d5d37abe -a 192.168.100.8 -s 4420 -i 4 00:14:09.944 12:28:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:09.944 12:28:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:09.944 12:28:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:09.944 12:28:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:09.944 12:28:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:09.944 12:28:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:12.471 12:28:17 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:12.471 [ 0]:0x1 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0d4c7d3c609c45cbbd7c1e4048c60b25 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0d4c7d3c609c45cbbd7c1e4048c60b25 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:12.471 [ 1]:0x2 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28a414f47e47410c92f9f2aab79ab3ef 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28a414f47e47410c92f9f2aab79ab3ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.471 12:28:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:12.471 12:28:18 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:12.471 [ 0]:0x2 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:12.471 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.729 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28a414f47e47410c92f9f2aab79ab3ef 00:14:12.729 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28a414f47e47410c92f9f2aab79ab3ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.729 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:12.729 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:12.729 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:12.729 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:12.729 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.729 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:12.729 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.729 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:12.729 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.729 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:12.729 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:14:12.729 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:12.987 [2024-11-20 12:28:18.559308] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:12.987 request: 00:14:12.987 { 00:14:12.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.987 "nsid": 2, 00:14:12.987 "host": "nqn.2016-06.io.spdk:host1", 00:14:12.987 "method": "nvmf_ns_remove_host", 00:14:12.987 "req_id": 1 00:14:12.987 } 00:14:12.987 Got JSON-RPC error response 00:14:12.987 response: 00:14:12.987 { 00:14:12.987 "code": -32602, 00:14:12.987 "message": "Invalid parameters" 00:14:12.987 } 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:12.987 [ 0]:0x2 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28a414f47e47410c92f9f2aab79ab3ef 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28a414f47e47410c92f9f2aab79ab3ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:12.987 12:28:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:13.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.555 12:28:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2760659 00:14:13.555 12:28:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:13.555 12:28:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.555 12:28:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2760659 /var/tmp/host.sock 00:14:13.555 12:28:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2760659 ']' 00:14:13.555 12:28:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:13.555 12:28:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.555 12:28:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:13.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
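For readers following the trace: the ns_is_visible helper the test keeps re-running reduces to an NGUID probe — a namespace masked from the connecting host still occupies its NSID, but identifies with an all-zero NGUID. A minimal sketch of the pattern, with the controller path, NSID values, and RPC calls taken from the run above:

    # Probe whether NSID $1 (e.g. 0x1) is exposed to this host on /dev/nvme0.
    # A host-masked namespace reports an all-zero NGUID even though the NSID
    # is still allocated, so the function's exit status is the NGUID test.
    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

    # Masking itself is driven from the target side via JSON-RPC, as exercised above:
    #   rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    #   rpc.py nvmf_ns_add_host      nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    #   rpc.py nvmf_ns_remove_host   nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1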
00:14:13.555 12:28:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.555 12:28:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:13.555 [2024-11-20 12:28:19.071547] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:14:13.555 [2024-11-20 12:28:19.071637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2760659 ] 00:14:13.555 [2024-11-20 12:28:19.143017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.555 [2024-11-20 12:28:19.205768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.814 12:28:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:13.814 12:28:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:13.814 12:28:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.072 12:28:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:14.638 12:28:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 9763b649-781d-41ed-a518-77ba2d1fb0eb 00:14:14.638 12:28:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:14.638 12:28:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9763B649781D41EDA51877BA2D1FB0EB -i 00:14:14.897 12:28:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 6afa1e83-9a06-4c11-9952-e4dabc7202bc 00:14:14.897 12:28:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:14.897 12:28:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 6AFA1E839A064C119952E4DABC7202BC -i 00:14:15.155 12:28:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:15.413 12:28:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:15.980 12:28:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:15.980 12:28:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:14:16.239 nvme0n1 00:14:16.239 12:28:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:16.239 12:28:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:16.497 nvme1n2 00:14:16.755 12:28:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:16.755 12:28:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:16.755 12:28:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:16.755 12:28:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:16.755 12:28:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:17.013 12:28:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:17.013 12:28:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:17.013 12:28:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:17.013 12:28:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:17.271 12:28:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 9763b649-781d-41ed-a518-77ba2d1fb0eb == \9\7\6\3\b\6\4\9\-\7\8\1\d\-\4\1\e\d\-\a\5\1\8\-\7\7\b\a\2\d\1\f\b\0\e\b ]] 00:14:17.271 12:28:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:17.271 12:28:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:17.271 12:28:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:17.530 12:28:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 6afa1e83-9a06-4c11-9952-e4dabc7202bc == \6\a\f\a\1\e\8\3\-\9\a\0\6\-\4\c\1\1\-\9\9\5\2\-\e\4\d\a\b\c\7\2\0\2\b\c ]] 00:14:17.530 12:28:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.097 12:28:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:18.356 12:28:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 9763b649-781d-41ed-a518-77ba2d1fb0eb 00:14:18.356 12:28:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:18.356 12:28:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9763B649781D41EDA51877BA2D1FB0EB 00:14:18.356 12:28:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:18.356 12:28:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9763B649781D41EDA51877BA2D1FB0EB 00:14:18.356 12:28:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:18.356 12:28:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.356 12:28:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:18.356 12:28:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.356 12:28:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:18.356 12:28:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.356 12:28:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:18.356 12:28:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:14:18.356 12:28:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9763B649781D41EDA51877BA2D1FB0EB 00:14:18.614 [2024-11-20 12:28:24.272474] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:18.614 [2024-11-20 12:28:24.272536] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:18.614 [2024-11-20 12:28:24.272555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.614 request: 00:14:18.614 { 00:14:18.614 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:18.614 "namespace": { 00:14:18.614 "bdev_name": "invalid", 00:14:18.614 "nsid": 1, 00:14:18.614 "nguid": "9763B649781D41EDA51877BA2D1FB0EB", 00:14:18.614 "no_auto_visible": false 00:14:18.614 }, 00:14:18.614 "method": "nvmf_subsystem_add_ns", 00:14:18.614 "req_id": 1 00:14:18.614 } 00:14:18.614 Got JSON-RPC error response 00:14:18.614 response: 00:14:18.614 { 00:14:18.614 "code": -32602, 00:14:18.614 "message": "Invalid parameters" 00:14:18.614 } 00:14:18.614 12:28:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:18.614 12:28:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:18.614 12:28:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:18.614 12:28:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:18.614 12:28:24 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 9763b649-781d-41ed-a518-77ba2d1fb0eb 00:14:18.614 12:28:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:18.614 12:28:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9763B649781D41EDA51877BA2D1FB0EB -i 00:14:18.872 12:28:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:21.399 12:28:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:21.399 12:28:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:21.399 12:28:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:21.399 12:28:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:21.399 12:28:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2760659 00:14:21.399 12:28:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2760659 ']' 00:14:21.399 12:28:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2760659 00:14:21.399 12:28:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:21.399 12:28:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.400 12:28:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2760659 00:14:21.400 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:21.400 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:21.400 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2760659' 00:14:21.400 killing process with pid 2760659 00:14:21.400 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2760659 00:14:21.400 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2760659 00:14:21.657 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:22.224 12:28:27 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:22.224 rmmod nvme_rdma 00:14:22.224 rmmod nvme_fabrics 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2759244 ']' 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2759244 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2759244 ']' 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2759244 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2759244 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2759244' 00:14:22.224 killing process with pid 2759244 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2759244 00:14:22.224 12:28:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2759244 00:14:22.482 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:22.482 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:14:22.482 00:14:22.482 real 0m25.767s 00:14:22.482 user 0m41.857s 00:14:22.482 sys 0m4.329s 00:14:22.482 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:22.482 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:22.482 ************************************ 00:14:22.482 END TEST nvmf_ns_masking 00:14:22.482 ************************************ 00:14:22.482 12:28:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:22.482 12:28:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:14:22.482 12:28:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:22.482 12:28:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:22.482 12:28:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:22.482 ************************************ 00:14:22.482 START TEST nvmf_nvme_cli 00:14:22.482 ************************************ 
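Between test suites the harness unwinds everything it set up: the host-side kernel modules come out with modprobe -r (which, as the rmmod lines above show, also drops nvme_fabrics as an unused dependency), and the spdk_tgt processes are stopped by PID. A condensed sketch of that teardown — the function name nvmf_teardown is illustrative, and killprocess is simplified from the common/autotest_common.sh version traced above, which additionally checks the process name before killing:

    # Unload the host-side NVMe fabrics modules, then stop the target process.
    nvmf_teardown() {
        modprobe -v -r nvme-rdma        # also rmmods nvme_fabrics when unused
        modprobe -v -r nvme-fabrics
        killprocess "$nvmfpid"          # e.g. 2759244 in this run
    }

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }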
00:14:22.482 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:14:22.482 * Looking for test storage... 00:14:22.482 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:22.482 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:22.482 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:14:22.483 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:22.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.742 --rc genhtml_branch_coverage=1 00:14:22.742 --rc genhtml_function_coverage=1 00:14:22.742 --rc genhtml_legend=1 00:14:22.742 --rc geninfo_all_blocks=1 00:14:22.742 --rc geninfo_unexecuted_blocks=1 00:14:22.742 00:14:22.742 ' 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:22.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.742 --rc genhtml_branch_coverage=1 00:14:22.742 --rc genhtml_function_coverage=1 00:14:22.742 --rc genhtml_legend=1 00:14:22.742 --rc geninfo_all_blocks=1 00:14:22.742 --rc geninfo_unexecuted_blocks=1 00:14:22.742 00:14:22.742 ' 00:14:22.742 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:22.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.742 --rc genhtml_branch_coverage=1 00:14:22.742 --rc genhtml_function_coverage=1 00:14:22.742 --rc genhtml_legend=1 00:14:22.742 --rc geninfo_all_blocks=1 00:14:22.742 --rc geninfo_unexecuted_blocks=1 00:14:22.742 00:14:22.742 ' 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:22.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.743 --rc genhtml_branch_coverage=1 00:14:22.743 --rc genhtml_function_coverage=1 00:14:22.743 --rc genhtml_legend=1 00:14:22.743 --rc geninfo_all_blocks=1 00:14:22.743 --rc geninfo_unexecuted_blocks=1 00:14:22.743 00:14:22.743 ' 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:22.743 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:22.743 12:28:28 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:22.743 12:28:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:14:25.282 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:25.282 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:14:25.283 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:14:25.283 Found net devices under 0000:83:00.0: mlx_0_0 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:14:25.283 Found net devices under 0000:83:00.1: mlx_0_1 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' 
Linux ']' 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:25.283 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:25.283 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:14:25.283 altname enp131s0f0np0 00:14:25.283 inet 192.168.100.8/24 scope global mlx_0_0 00:14:25.283 valid_lft forever preferred_lft forever 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:25.283 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:25.283 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:14:25.283 altname enp131s0f1np1 00:14:25.283 inet 192.168.100.9/24 scope global mlx_0_1 00:14:25.283 valid_lft forever preferred_lft forever 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for 
net_dev in "${net_devs[@]}" 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:25.283 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:14:25.284 192.168.100.9' 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:14:25.284 192.168.100.9' 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:14:25.284 192.168.100.9' 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1 00:14:25.284 
12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2762847 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2762847 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2762847 ']' 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:25.284 12:28:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.284 [2024-11-20 12:28:30.761866] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:14:25.284 [2024-11-20 12:28:30.761982] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.284 [2024-11-20 12:28:30.836445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:25.284 [2024-11-20 12:28:30.902429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.284 [2024-11-20 12:28:30.902498] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.284 [2024-11-20 12:28:30.902515] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.284 [2024-11-20 12:28:30.902528] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
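The address harvesting a few records back is plain iproute2 text munging: 'ip -o -4 addr show IFC' emits one record per address, awk takes the fourth (CIDR) field, and cut strips the prefix length. A minimal standalone sketch of the same extraction (the mlx_0_* interface names are the aliases specific to this rig; the trailing head -n1 is added here for safety and is not in the original helper):

  #!/usr/bin/env bash
  # Mirror get_ip_address() from test/nvmf/common.sh: first IPv4 per netdev.
  for ifc in mlx_0_0 mlx_0_1; do
    addr=$(ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1 | head -n1)
    if [ -z "$addr" ]; then
      echo "no IPv4 address on $ifc" >&2
      continue
    fi
    echo "$ifc $addr"
  done

On this node it prints 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1, which the harness stores as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP.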
00:14:25.284 [2024-11-20 12:28:30.902539] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.284 [2024-11-20 12:28:30.903889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.284 [2024-11-20 12:28:30.904006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.284 [2024-11-20 12:28:30.904051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:25.284 [2024-11-20 12:28:30.904055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.555 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:25.555 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:25.555 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:25.555 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:25.555 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.555 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.555 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:25.555 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.555 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.555 [2024-11-20 12:28:31.145717] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2439df0/0x243e2e0) succeed. 00:14:25.555 [2024-11-20 12:28:31.161304] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x243b480/0x247f980) succeed. 
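With the nvmf_create_transport call above done and both mlx5 IB devices registered, everything that follows is ordinary rpc.py provisioning. Condensed into one sketch (a sketch only: the rpc.py path, the -i 291 controller ID, and the buffer sizes are copied from this run, not requirements of NVMe-oF in general):

  #!/usr/bin/env bash
  set -e
  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # RDMA transport sized as in this run.
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # Two 64 MiB / 512 B-block malloc bdevs to serve as namespaces.
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC bdev_malloc_create 64 512 -b Malloc1
  # Subsystem whose serial number the initiator side greps for later.
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  # Expose the subsystem and the discovery service on the first RDMA IP.
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

Each call mirrors an rpc_cmd invocation in the records below; once the listeners are up, the nvme discover that follows reports exactly two discovery-log entries (the discovery subsystem and cnode1) on 192.168.100.8:4420.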
00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.876 Malloc0 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.876 Malloc1 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.876 [2024-11-20 12:28:31.408461] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:25.876 12:28:31 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -a 192.168.100.8 -s 4420 00:14:25.876 00:14:25.876 Discovery Log Number of Records 2, Generation counter 2 00:14:25.876 =====Discovery Log Entry 0====== 00:14:25.876 trtype: rdma 00:14:25.876 adrfam: ipv4 00:14:25.876 subtype: current discovery subsystem 00:14:25.876 treq: not required 00:14:25.876 portid: 0 00:14:25.876 trsvcid: 4420 00:14:25.876 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:25.876 traddr: 192.168.100.8 00:14:25.876 eflags: explicit discovery connections, duplicate discovery information 00:14:25.876 rdma_prtype: not specified 00:14:25.876 rdma_qptype: connected 00:14:25.876 rdma_cms: rdma-cm 00:14:25.876 rdma_pkey: 0x0000 00:14:25.876 =====Discovery Log Entry 1====== 00:14:25.876 trtype: rdma 00:14:25.876 adrfam: ipv4 00:14:25.876 subtype: nvme subsystem 00:14:25.876 treq: not required 00:14:25.876 portid: 0 00:14:25.876 trsvcid: 4420 00:14:25.876 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:25.876 traddr: 192.168.100.8 00:14:25.876 eflags: none 00:14:25.876 rdma_prtype: not specified 00:14:25.876 rdma_qptype: connected 00:14:25.876 rdma_cms: rdma-cm 00:14:25.876 rdma_pkey: 0x0000 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:25.876 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:25.877 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:25.877 12:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:26.819 12:28:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:26.819 12:28:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:26.819 12:28:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:26.819 12:28:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:26.819 12:28:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:26.819 12:28:32 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:29.348 /dev/nvme0n2 ]] 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:29.348 12:28:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:29.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:29.914 
12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:29.914 rmmod nvme_rdma 00:14:29.914 rmmod nvme_fabrics 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2762847 ']' 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2762847 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2762847 ']' 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2762847 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.914 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2762847 00:14:30.172 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:30.172 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:30.172 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2762847' 00:14:30.172 killing process with pid 2762847 00:14:30.172 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2762847 00:14:30.172 12:28:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2762847 00:14:30.430 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:30.430 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:14:30.430 00:14:30.430 real 0m7.931s 00:14:30.430 user 0m21.069s 00:14:30.430 sys 0m2.353s 00:14:30.430 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.430 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.430 ************************************ 00:14:30.430 END TEST nvmf_nvme_cli 00:14:30.430 ************************************ 00:14:30.430 12:28:36 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:14:30.430 12:28:36 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:14:30.430 12:28:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:30.430 12:28:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.430 12:28:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:30.430 ************************************ 00:14:30.430 START TEST nvmf_auth_target 00:14:30.430 ************************************ 00:14:30.430 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:14:30.430 * Looking for test storage... 00:14:30.430 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:30.430 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:30.430 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:14:30.430 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.690 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:30.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.691 --rc genhtml_branch_coverage=1 00:14:30.691 --rc genhtml_function_coverage=1 00:14:30.691 --rc genhtml_legend=1 00:14:30.691 --rc geninfo_all_blocks=1 00:14:30.691 --rc geninfo_unexecuted_blocks=1 00:14:30.691 00:14:30.691 ' 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:30.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.691 --rc genhtml_branch_coverage=1 00:14:30.691 --rc genhtml_function_coverage=1 00:14:30.691 --rc genhtml_legend=1 00:14:30.691 --rc geninfo_all_blocks=1 00:14:30.691 --rc geninfo_unexecuted_blocks=1 00:14:30.691 00:14:30.691 ' 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:30.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.691 --rc genhtml_branch_coverage=1 00:14:30.691 --rc genhtml_function_coverage=1 00:14:30.691 --rc genhtml_legend=1 00:14:30.691 --rc geninfo_all_blocks=1 00:14:30.691 --rc geninfo_unexecuted_blocks=1 00:14:30.691 00:14:30.691 ' 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:30.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.691 --rc genhtml_branch_coverage=1 00:14:30.691 --rc genhtml_function_coverage=1 00:14:30.691 --rc genhtml_legend=1 00:14:30.691 --rc geninfo_all_blocks=1 00:14:30.691 --rc geninfo_unexecuted_blocks=1 00:14:30.691 00:14:30.691 ' 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.691 12:28:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:30.691 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:30.691 12:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:33.229 12:28:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:14:33.229 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:33.229 12:28:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:33.229 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:14:33.230 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:14:33.230 Found net devices under 0000:83:00.0: mlx_0_0 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:14:33.230 Found net devices under 0000:83:00.1: mlx_0_1 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.230 12:28:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:33.230 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:33.230 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:14:33.230 altname enp131s0f0np0 00:14:33.230 inet 192.168.100.8/24 scope global mlx_0_0 00:14:33.230 valid_lft forever preferred_lft forever 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:33.230 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:33.230 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:14:33.230 altname enp131s0f1np1 00:14:33.230 inet 192.168.100.9/24 scope global mlx_0_1 00:14:33.230 valid_lft forever preferred_lft forever 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:33.230 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 
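
A note on the get_ip_address cycle traced above (and repeated for mlx_0_1 just below): it reduces to a single pipeline that lists the interface's IPv4 addresses in one-record-per-line form, takes column 4 ("ADDR/PREFIX"), and strips the prefix length. A minimal standalone sketch of the same pattern; the helper name get_ipv4 is ours, the traced test keeps the equivalent in nvmf/common.sh:

    # Print an interface's IPv4 address without the /prefix suffix.
    # "ip -o -4" emits one line per address; column 4 is "ADDR/PREFIX".
    get_ipv4() {
        local ifc=$1
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    }
    get_ipv4 mlx_0_0   # prints 192.168.100.8 on the node traced here
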
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:14:33.231 192.168.100.9'
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:14:33.231 192.168.100.9'
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:14:33.231 192.168.100.9'
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2764678
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2764678
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2764678 ']'
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
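
The two addresses gathered into RDMA_IP_LIST are then split with plain head/tail, exactly as traced at nvmf/common.sh@485-486. Isolated, the selection looks like this (here-strings stand in for the traced echo pipelines):

    # First line of the newline-separated list becomes the primary target IP,
    # second line the secondary, matching NVMF_FIRST/SECOND_TARGET_IP above.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$RDMA_IP_LIST")                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$RDMA_IP_LIST" | head -n 1)  # 192.168.100.9
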
00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2764703 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fc9370767851b96fc17f534da7d2dee17ed03a6288a9730a 00:14:33.231 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:33.490 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.MTC 00:14:33.490 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fc9370767851b96fc17f534da7d2dee17ed03a6288a9730a 0 00:14:33.490 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fc9370767851b96fc17f534da7d2dee17ed03a6288a9730a 0 00:14:33.490 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:33.490 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:33.490 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fc9370767851b96fc17f534da7d2dee17ed03a6288a9730a 00:14:33.490 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:33.490 12:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@733 -- # python - 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.MTC 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.MTC 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.MTC 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2d623e5d811da312db47bd6600ee40227b26ceb8a53f4cd2cf58d6ccc55920aa 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.4Dx 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2d623e5d811da312db47bd6600ee40227b26ceb8a53f4cd2cf58d6ccc55920aa 3 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2d623e5d811da312db47bd6600ee40227b26ceb8a53f4cd2cf58d6ccc55920aa 3 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2d623e5d811da312db47bd6600ee40227b26ceb8a53f4cd2cf58d6ccc55920aa 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.4Dx 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.4Dx 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.4Dx 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:33.491 12:28:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=db2f177a6498cac5182b37d5a92dde34 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.qxw 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key db2f177a6498cac5182b37d5a92dde34 1 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 db2f177a6498cac5182b37d5a92dde34 1 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=db2f177a6498cac5182b37d5a92dde34 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.qxw 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.qxw 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.qxw 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8c92f3ac8c6d08c1025b4d1b457e41dbb1d81938c24c8564 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Zr0 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8c92f3ac8c6d08c1025b4d1b457e41dbb1d81938c24c8564 2 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8c92f3ac8c6d08c1025b4d1b457e41dbb1d81938c24c8564 2 00:14:33.491 12:28:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8c92f3ac8c6d08c1025b4d1b457e41dbb1d81938c24c8564 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Zr0 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Zr0 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Zr0 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ff75e617a68f01e4b1ecf7245f9e73ab543c32261c1ed199 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.2Pj 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ff75e617a68f01e4b1ecf7245f9e73ab543c32261c1ed199 2 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ff75e617a68f01e4b1ecf7245f9e73ab543c32261c1ed199 2 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ff75e617a68f01e4b1ecf7245f9e73ab543c32261c1ed199 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:33.491 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.2Pj 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.2Pj 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.2Pj 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
gen_dhchap_key sha256 32 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a54aa577c39e99c09d350612e35fe474 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.XZs 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a54aa577c39e99c09d350612e35fe474 1 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a54aa577c39e99c09d350612e35fe474 1 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a54aa577c39e99c09d350612e35fe474 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.XZs 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.XZs 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.XZs 00:14:33.750 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9ccefe836d6ac5a635ffc4467041ebc5aae81e88de54416d576a6fec5da258d2 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:33.751 12:28:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.oGb 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9ccefe836d6ac5a635ffc4467041ebc5aae81e88de54416d576a6fec5da258d2 3 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9ccefe836d6ac5a635ffc4467041ebc5aae81e88de54416d576a6fec5da258d2 3 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9ccefe836d6ac5a635ffc4467041ebc5aae81e88de54416d576a6fec5da258d2 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.oGb 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.oGb 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.oGb 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2764678 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2764678 ']' 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.751 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.009 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.009 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:34.009 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2764703 /var/tmp/host.sock 00:14:34.009 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2764703 ']' 00:14:34.009 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:34.009 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.009 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
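
Each gen_dhchap_key trace above follows the same recipe: draw len/2 random bytes as a hex string of len characters, then wrap that ASCII string in the DH-HMAC-CHAP secret representation DHHC-1:<hash id>:<base64 of secret plus its little-endian CRC-32>:, where hash ids 0-3 map to null/sha256/sha384/sha512 as in the digests table traced earlier. A self-contained sketch of that formatting step, assuming the layout just described; the python one-liner is our reconstruction of the 'python -' heredoc the trace does not expand:

    gen_dhchap_key() {    # usage: gen_dhchap_key null 48  (digest, key length in hex chars)
        local digest=$1 len=$2 key
        declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        # len hex characters of entropy; the hex string itself is the secret
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
        python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "${digests[$digest]}"
    }

The nvme connect step later in this log shows the result on the wire: the DHHC-1:00:ZmM5Mzcw... secret decodes back to the 48-hex-character key fc937076... generated above plus its CRC trailer, which is why the plaintext key is readable straight out of the trace.
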
00:14:34.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:34.009 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.009 12:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.574 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.575 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:34.575 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:34.575 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.575 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.575 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.575 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:34.575 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.MTC 00:14:34.575 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.575 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.575 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.575 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.MTC 00:14:34.575 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.MTC 00:14:34.833 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.4Dx ]] 00:14:34.833 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4Dx 00:14:34.833 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.833 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.833 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.833 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4Dx 00:14:34.833 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4Dx 00:14:35.092 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:35.092 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.qxw 00:14:35.092 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.092 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.349 12:28:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.349 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.qxw 00:14:35.349 12:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.qxw 00:14:35.606 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Zr0 ]] 00:14:35.606 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Zr0 00:14:35.606 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.606 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.606 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.606 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Zr0 00:14:35.606 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Zr0 00:14:35.863 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:35.863 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.2Pj 00:14:35.863 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.863 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.863 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.863 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.2Pj 00:14:35.863 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.2Pj 00:14:36.121 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.XZs ]] 00:14:36.121 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XZs 00:14:36.121 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.121 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.121 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.121 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XZs 00:14:36.121 12:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XZs 00:14:36.686 12:28:42 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:36.686 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.oGb 00:14:36.686 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.686 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.686 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.686 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.oGb 00:14:36.686 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.oGb 00:14:36.944 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:36.944 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:36.944 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:36.944 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.944 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:36.944 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:37.203 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:37.203 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.203 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:37.203 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:37.203 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:37.203 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.203 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.203 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.203 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.203 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.203 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.203 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:37.203 12:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:37.769
00:14:37.769 12:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:37.769 12:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:37.769 12:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:38.027 12:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:38.027 12:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:38.027 12:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.027 12:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:38.027 12:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.027 12:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:38.027 {
00:14:38.027 "cntlid": 1,
00:14:38.027 "qid": 0,
00:14:38.027 "state": "enabled",
00:14:38.027 "thread": "nvmf_tgt_poll_group_000",
00:14:38.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae",
00:14:38.027 "listen_address": {
00:14:38.027 "trtype": "RDMA",
00:14:38.027 "adrfam": "IPv4",
00:14:38.027 "traddr": "192.168.100.8",
00:14:38.027 "trsvcid": "4420"
00:14:38.027 },
00:14:38.027 "peer_address": {
00:14:38.027 "trtype": "RDMA",
00:14:38.027 "adrfam": "IPv4",
00:14:38.027 "traddr": "192.168.100.8",
00:14:38.027 "trsvcid": "37320"
00:14:38.027 },
00:14:38.027 "auth": {
00:14:38.027 "state": "completed",
00:14:38.027 "digest": "sha256",
00:14:38.027 "dhgroup": "null"
00:14:38.027 }
00:14:38.027 }
00:14:38.027 ]'
00:14:38.027 12:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:38.027 12:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:38.027 12:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:38.027 12:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:38.027 12:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:38.285 12:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:38.285 12:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:38.285 12:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:38.543 12:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=:
00:14:38.543 12:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=:
00:14:39.917 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:39.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:39.917 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
00:14:39.917 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.917 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:39.917 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.917 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:39.917 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:39.917 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:40.175 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1
00:14:40.175 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:40.175 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:40.175 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:40.175 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:40.175 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:40.175 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:40.175 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:40.175 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:40.175 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:40.175 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:40.175 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:40.175 12:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:40.742
00:14:40.742 12:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:40.742 12:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:40.742 12:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:41.001 12:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:41.001 12:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:41.001 12:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.001 12:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:41.001 12:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.001 12:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:41.001 {
00:14:41.001 "cntlid": 3,
00:14:41.001 "qid": 0,
00:14:41.001 "state": "enabled",
00:14:41.001 "thread": "nvmf_tgt_poll_group_000",
00:14:41.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae",
00:14:41.001 "listen_address": {
00:14:41.001 "trtype": "RDMA",
00:14:41.001 "adrfam": "IPv4",
00:14:41.001 "traddr": "192.168.100.8",
00:14:41.001 "trsvcid": "4420"
00:14:41.001 },
00:14:41.001 "peer_address": {
00:14:41.001 "trtype": "RDMA",
00:14:41.001 "adrfam": "IPv4",
00:14:41.001 "traddr": "192.168.100.8",
00:14:41.001 "trsvcid": "55219"
00:14:41.001 },
00:14:41.001 "auth": {
00:14:41.001 "state": "completed",
00:14:41.001 "digest": "sha256",
00:14:41.001 "dhgroup": "null"
00:14:41.001 }
00:14:41.001 }
00:14:41.001 ]'
00:14:41.001 12:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:41.001 12:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:41.001 12:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:41.001 12:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:41.001 12:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:41.260 12:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:41.260 12:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:41.260 12:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:41.518 12:28:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==:
00:14:41.518 12:28:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==:
00:14:42.893 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:42.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:42.893 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
00:14:42.893 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.893 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:42.893 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.893 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:42.893 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:42.893 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:43.151 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:14:43.151 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:43.151 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:43.151 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:43.151 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:43.151 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:43.151 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:43.151 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:43.151 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:43.151 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:43.151 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:43.151 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:43.151 12:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:43.717
00:14:43.717 12:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:43.717 12:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:43.717 12:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:43.975 12:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:43.976 12:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:43.976 12:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:43.976 12:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:43.976 12:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:43.976 12:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:43.976 {
00:14:43.976 "cntlid": 5,
00:14:43.976 "qid": 0,
00:14:43.976 "state": "enabled",
00:14:43.976 "thread": "nvmf_tgt_poll_group_000",
00:14:43.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae",
00:14:43.976 "listen_address": {
00:14:43.976 "trtype": "RDMA",
00:14:43.976 "adrfam": "IPv4",
00:14:43.976 "traddr": "192.168.100.8",
00:14:43.976 "trsvcid": "4420"
00:14:43.976 },
00:14:43.976 "peer_address": {
00:14:43.976 "trtype": "RDMA",
00:14:43.976 "adrfam": "IPv4",
00:14:43.976 "traddr": "192.168.100.8",
00:14:43.976 "trsvcid": "55657"
00:14:43.976 },
00:14:43.976 "auth": {
00:14:43.976 "state": "completed",
00:14:43.976 "digest": "sha256",
00:14:43.976 "dhgroup": "null"
00:14:43.976 }
00:14:43.976 }
00:14:43.976 ]'
00:14:43.976 12:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:43.976 12:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:43.976 12:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:43.976 12:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:43.976 12:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:44.234 12:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:44.234 12:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:44.234 12:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:44.492 12:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta:
00:14:44.492 12:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta:
00:14:45.869 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:45.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:45.869 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
00:14:45.869 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.869 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:45.869 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.869 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:45.869 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:45.869 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:46.128 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:14:46.128 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:46.128 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:46.128 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:46.128 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:46.128 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:46.128 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key3
00:14:46.128 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:46.128 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:46.128 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:46.128 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:46.128 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:46.128 12:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:46.694
00:14:46.694 12:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:46.694 12:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:46.694 12:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:46.953 12:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:46.953 12:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:46.953 12:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:46.953 12:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:46.953 12:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:46.953 12:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:46.953 {
00:14:46.953 "cntlid": 7,
00:14:46.953 "qid": 0,
00:14:46.953 "state": "enabled",
00:14:46.953 "thread": "nvmf_tgt_poll_group_000",
00:14:46.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae",
00:14:46.953 "listen_address": {
00:14:46.953 "trtype": "RDMA",
00:14:46.953 "adrfam": "IPv4",
00:14:46.953 "traddr": "192.168.100.8",
00:14:46.953 "trsvcid": "4420"
00:14:46.953 },
00:14:46.953 "peer_address": {
00:14:46.953 "trtype": "RDMA",
00:14:46.953 "adrfam": "IPv4",
00:14:46.953 "traddr": "192.168.100.8",
00:14:46.953 "trsvcid": "56560"
00:14:46.953 },
00:14:46.953 "auth": {
00:14:46.953 "state": "completed",
00:14:46.953 "digest": "sha256",
00:14:46.953 "dhgroup": "null"
00:14:46.953 }
00:14:46.953 }
00:14:46.953 ]'
00:14:46.953 12:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:46.953 12:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:46.953 12:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
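Note: the xtrace above repeats one connect_authenticate cycle per DHCHAP key index. A minimal sketch of that cycle, reconstructed from the trace only (hostrpc and the key0..key3/ckey0..ckey3 naming come from target/auth.sh; the direct use of rpc.py for the target-side calls, which the real script wraps in rpc_cmd, is an assumption):

# Hedged reconstruction of one cycle; not the literal auth.sh source.
RPC_PY=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
SUBNQN=nqn.2024-03.io.spdk:cnode0
hostrpc() { "$RPC_PY" -s /var/tmp/host.sock "$@"; }   # host-side SPDK app socket
key=key1                                              # one of key0..key3
# 1) register the host NQN on the subsystem with this DHCHAP key pair
#    (assumed: target listens on the default RPC socket, unlike the host app)
"$RPC_PY" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key "$key" --dhchap-ctrlr-key "c$key"
# 2) attach a controller from the host side so DH-HMAC-CHAP actually negotiates
hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key "$key" --dhchap-ctrlr-key "c$key"
# 3) verify the qpair reports the negotiated auth parameters, then tear down
"$RPC_PY" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect "completed"
hostrpc bdev_nvme_detach_controller nvme0

The [[ null == \n\u\l\l ]] comparison that resumes below is the dhgroup half of the same jq check pair, followed by the kernel-initiator leg of the cycle (nvme connect with the raw DHHC-1 secrets, then nvme disconnect and nvmf_subsystem_remove_host).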
00:14:47.211 12:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:47.211 12:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:47.211 12:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:47.211 12:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:47.211 12:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:47.469 12:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=:
00:14:47.470 12:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=:
00:14:48.843 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:48.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:48.844 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
00:14:48.844 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.844 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:48.844 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.844 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:48.844 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:48.844 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:48.844 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:49.410 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:14:49.410 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:49.410 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:49.410 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:49.410 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:49.410 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:49.410 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:49.410 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.410 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:49.410 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.410 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:49.410 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:49.410 12:28:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:49.668
00:14:49.668 12:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:49.668 12:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:49.668 12:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:49.926 12:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:49.926 12:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:49.926 12:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.926 12:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:49.926 12:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.926 12:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:49.926 {
00:14:49.926 "cntlid": 9,
00:14:49.926 "qid": 0,
00:14:49.926 "state": "enabled",
00:14:49.926 "thread": "nvmf_tgt_poll_group_000",
00:14:49.926 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae",
00:14:49.926 "listen_address": {
00:14:49.926 "trtype": "RDMA",
00:14:49.926 "adrfam": "IPv4",
00:14:49.926 "traddr": "192.168.100.8",
00:14:49.926 "trsvcid": "4420"
00:14:49.926 },
00:14:49.926 "peer_address": {
00:14:49.926 "trtype": "RDMA",
00:14:49.926 "adrfam": "IPv4",
00:14:49.926 "traddr": "192.168.100.8",
00:14:49.926 "trsvcid": "51603"
00:14:49.926 },
00:14:49.926 "auth": {
00:14:49.926 "state": "completed",
00:14:49.926 "digest": "sha256",
00:14:49.926 "dhgroup": "ffdhe2048"
00:14:49.926 }
00:14:49.926 }
00:14:49.926 ]'
00:14:49.926 12:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:50.183 12:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:50.183 12:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:50.183 12:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:50.183 12:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:50.183 12:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:50.183 12:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:50.183 12:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:50.442 12:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=:
00:14:50.442 12:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=:
00:14:51.815 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:51.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:51.815 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
00:14:51.815 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:51.815 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:51.815 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:51.815 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:51.815 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:51.815 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:52.381 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:14:52.381 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:52.381 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:52.381 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:52.381 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:52.381 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:52.381 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:52.381 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:52.381 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:52.381 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:52.381 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:52.381 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:52.381 12:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:52.640
00:14:52.640 12:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:52.640 12:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:52.640 12:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:52.899 12:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:52.899 12:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:52.899 12:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:52.899 12:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:52.899 12:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:52.899 12:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:52.899 {
00:14:52.899 "cntlid": 11,
00:14:52.899 "qid": 0,
00:14:52.899 "state": "enabled",
00:14:52.899 "thread": "nvmf_tgt_poll_group_000",
00:14:52.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae",
00:14:52.899 "listen_address": {
00:14:52.899 "trtype": "RDMA",
00:14:52.899 "adrfam": "IPv4",
00:14:52.899 "traddr": "192.168.100.8",
00:14:52.899 "trsvcid": "4420"
00:14:52.899 },
00:14:52.899 "peer_address": {
00:14:52.899 "trtype": "RDMA",
00:14:52.899 "adrfam": "IPv4",
00:14:52.899 "traddr": "192.168.100.8",
00:14:52.899 "trsvcid": "49535"
00:14:52.899 },
00:14:52.899 "auth": {
00:14:52.899 "state": "completed",
00:14:52.899 "digest": "sha256",
00:14:52.899 "dhgroup": "ffdhe2048"
00:14:52.899 }
00:14:52.899 }
00:14:52.899 ]'
00:14:52.899 12:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:53.157 12:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:53.157 12:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:53.157 12:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:53.157 12:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:53.157 12:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:53.157 12:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:53.157 12:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:53.415 12:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==:
00:14:53.415 12:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==:
00:14:54.787 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:55.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:55.045 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
00:14:55.045 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:55.045 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:55.045 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:55.045 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:55.045 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:55.045 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:55.303 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:14:55.303 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:55.303 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:55.303 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:55.303 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:55.303 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:55.303 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:55.303 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:55.303 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:55.303 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:55.303 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:55.303 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:55.303 12:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:55.560
00:14:55.818 12:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:55.818 12:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:55.818 12:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:56.076 12:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:56.076 12:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:56.076 12:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:56.076 12:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:56.076 12:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:56.076 12:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:56.076 {
00:14:56.076 "cntlid": 13,
00:14:56.076 "qid": 0,
00:14:56.076 "state": "enabled",
00:14:56.076 "thread": "nvmf_tgt_poll_group_000",
00:14:56.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae",
00:14:56.076 "listen_address": {
00:14:56.076 "trtype": "RDMA",
00:14:56.076 "adrfam": "IPv4",
00:14:56.076 "traddr": "192.168.100.8",
00:14:56.076 "trsvcid": "4420"
00:14:56.076 },
00:14:56.076 "peer_address": {
00:14:56.076 "trtype": "RDMA",
00:14:56.076 "adrfam": "IPv4",
00:14:56.076 "traddr": "192.168.100.8",
00:14:56.076 "trsvcid": "42672"
00:14:56.076 },
00:14:56.076 "auth": {
00:14:56.076 "state": "completed",
00:14:56.076 "digest": "sha256",
00:14:56.076 "dhgroup": "ffdhe2048"
00:14:56.076 }
00:14:56.076 }
00:14:56.076 ]'
00:14:56.077 12:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:56.077 12:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:56.077 12:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:56.077 12:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:56.077 12:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:56.077 12:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:56.077 12:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:56.077 12:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:56.642 12:29:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta:
00:14:56.642 12:29:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta:
00:14:58.076 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:58.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:58.077 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
00:14:58.077 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:58.077 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:58.077 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:58.077 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:58.077 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:58.077 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:58.354 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:14:58.354 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:58.354 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:58.354 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:58.354 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:58.354 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:58.354 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key3
00:14:58.354 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:58.354 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:58.354 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:58.354 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:58.354 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:58.354 12:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:58.612
00:14:58.612 12:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:58.612 12:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:58.612 12:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:59.177 12:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:59.177 12:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:59.177 12:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:59.177 12:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:59.177 12:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:59.177 12:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:59.177 {
00:14:59.177 "cntlid": 15,
00:14:59.177 "qid": 0,
00:14:59.177 "state": "enabled",
00:14:59.177 "thread": "nvmf_tgt_poll_group_000",
00:14:59.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae",
00:14:59.177 "listen_address": {
00:14:59.177 "trtype": "RDMA",
00:14:59.177 "adrfam": "IPv4",
00:14:59.177 "traddr": "192.168.100.8",
00:14:59.177 "trsvcid": "4420"
00:14:59.177 },
00:14:59.177 "peer_address": {
00:14:59.177 "trtype": "RDMA",
00:14:59.177 "adrfam": "IPv4",
00:14:59.177 "traddr": "192.168.100.8",
00:14:59.177 "trsvcid": "35645"
00:14:59.177 },
00:14:59.177 "auth": {
00:14:59.177 "state": "completed",
00:14:59.177 "digest": "sha256",
00:14:59.177 "dhgroup": "ffdhe2048"
00:14:59.177 }
00:14:59.177 }
00:14:59.177 ]'
00:14:59.177 12:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:59.177 12:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:59.177 12:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:59.177 12:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:59.177 12:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:59.177 12:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:59.177 12:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:59.177 12:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:59.436 12:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=:
00:14:59.436 12:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=:
00:15:00.808 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:01.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:01.066 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
00:15:01.066 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:01.066 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:01.066 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:01.066 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:01.066 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:01.066 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:01.066 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:01.324 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:15:01.324 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:01.324 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:01.324 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:01.324 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:01.324 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:01.324 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:01.324 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:01.324 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:01.324 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:01.324 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:01.324 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:01.324 12:29:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:01.890
00:15:01.890 12:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:01.890 12:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:01.890 12:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:02.147 12:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:02.147 12:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:02.147 12:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.147 12:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:02.147 12:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.147 12:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:02.147 {
00:15:02.147 "cntlid": 17,
00:15:02.148 "qid": 0,
00:15:02.148 "state": "enabled",
00:15:02.148 "thread": "nvmf_tgt_poll_group_000",
00:15:02.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae",
00:15:02.148 "listen_address": {
00:15:02.148 "trtype": "RDMA",
00:15:02.148 "adrfam": "IPv4",
00:15:02.148 "traddr": "192.168.100.8",
00:15:02.148 "trsvcid": "4420"
00:15:02.148 },
00:15:02.148 "peer_address": {
00:15:02.148 "trtype": "RDMA",
00:15:02.148 "adrfam": "IPv4",
00:15:02.148 "traddr": "192.168.100.8",
00:15:02.148 "trsvcid": "44469"
00:15:02.148 },
00:15:02.148 "auth": {
00:15:02.148 "state": "completed",
00:15:02.148 "digest": "sha256",
00:15:02.148 "dhgroup": "ffdhe3072"
00:15:02.148 }
00:15:02.148 }
00:15:02.148 ]'
00:15:02.148 12:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:02.148 12:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:02.148 12:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:02.148 12:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:02.148 12:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:02.148 12:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:02.148 12:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:02.148 12:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:02.712 12:29:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=:
00:15:02.713 12:29:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=:
00:15:04.085 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:04.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:04.085 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
00:15:04.085 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.085 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:04.085 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.085 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:04.085 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:04.085 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:04.343 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1
00:15:04.343 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:04.343 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:04.343 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:04.343 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:04.343 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:04.343 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:04.343 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.343 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:04.343 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.343 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:04.343 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:04.343 12:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:04.909
00:15:04.909 12:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:04.909 12:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:04.909 12:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:05.167 12:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:05.167 12:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:05.167 12:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.167 12:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:05.167 12:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.167 12:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:05.167 {
00:15:05.167 "cntlid": 19,
00:15:05.167 "qid": 0,
00:15:05.167 "state": "enabled",
00:15:05.167 "thread": "nvmf_tgt_poll_group_000",
00:15:05.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae",
00:15:05.167 "listen_address": {
00:15:05.167 "trtype": "RDMA",
00:15:05.167 "adrfam": "IPv4",
00:15:05.167 "traddr": "192.168.100.8",
00:15:05.167 "trsvcid": "4420"
00:15:05.167 },
00:15:05.167 "peer_address": {
00:15:05.167 "trtype": "RDMA",
00:15:05.167 "adrfam": "IPv4",
00:15:05.167 "traddr": "192.168.100.8",
00:15:05.167 "trsvcid": "56474"
00:15:05.167 },
00:15:05.167 "auth": {
00:15:05.167 "state": "completed",
00:15:05.167 "digest": "sha256",
00:15:05.167 "dhgroup": "ffdhe3072"
00:15:05.167 }
00:15:05.167 }
00:15:05.167 ]'
00:15:05.167 12:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:05.167 12:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:05.167 12:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:05.167 12:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:05.167 12:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:05.167 12:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:05.167 12:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:05.167 12:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:05.734 12:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==:
00:15:05.734 12:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==:
00:15:07.108 12:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:07.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:07.108 12:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
00:15:07.108 12:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:07.108 12:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:07.108 12:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:07.108 12:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:07.108 12:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:07.108 12:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:07.367 12:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2
00:15:07.367 12:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:07.367 12:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:07.367 12:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:07.367 12:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:07.367 12:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:07.367 12:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:07.367 12:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:07.367 12:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:07.367 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:07.367 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:07.367 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:07.367 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:07.933
00:15:07.933 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:07.933 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:07.933 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- #
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.191 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.191 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.191 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.191 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.191 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.191 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.191 { 00:15:08.191 "cntlid": 21, 00:15:08.191 "qid": 0, 00:15:08.191 "state": "enabled", 00:15:08.191 "thread": "nvmf_tgt_poll_group_000", 00:15:08.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:15:08.191 "listen_address": { 00:15:08.191 "trtype": "RDMA", 00:15:08.191 "adrfam": "IPv4", 00:15:08.191 "traddr": "192.168.100.8", 00:15:08.191 "trsvcid": "4420" 00:15:08.191 }, 00:15:08.191 "peer_address": { 00:15:08.191 "trtype": "RDMA", 00:15:08.191 "adrfam": "IPv4", 00:15:08.191 "traddr": "192.168.100.8", 00:15:08.191 "trsvcid": "41777" 00:15:08.191 }, 00:15:08.191 "auth": { 00:15:08.191 "state": "completed", 00:15:08.191 "digest": "sha256", 00:15:08.191 "dhgroup": "ffdhe3072" 00:15:08.191 } 00:15:08.191 } 00:15:08.191 ]' 00:15:08.191 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.191 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.191 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.191 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:08.191 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.191 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.191 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.191 12:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.757 12:29:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:15:08.757 12:29:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:15:10.134 12:29:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.134 12:29:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:15:10.134 12:29:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.134 12:29:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.134 12:29:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.134 12:29:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.134 12:29:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:10.134 12:29:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:10.392 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:10.392 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.392 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:10.392 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:10.392 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:10.392 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.392 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key3 00:15:10.392 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.392 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.392 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.392 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:10.392 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:10.392 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:10.959 00:15:10.959 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.959 12:29:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.959 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.217 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.217 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.217 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.217 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.217 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.217 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.217 { 00:15:11.217 "cntlid": 23, 00:15:11.217 "qid": 0, 00:15:11.217 "state": "enabled", 00:15:11.217 "thread": "nvmf_tgt_poll_group_000", 00:15:11.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:15:11.217 "listen_address": { 00:15:11.217 "trtype": "RDMA", 00:15:11.217 "adrfam": "IPv4", 00:15:11.217 "traddr": "192.168.100.8", 00:15:11.217 "trsvcid": "4420" 00:15:11.217 }, 00:15:11.217 "peer_address": { 00:15:11.217 "trtype": "RDMA", 00:15:11.217 "adrfam": "IPv4", 00:15:11.217 "traddr": "192.168.100.8", 00:15:11.217 "trsvcid": "45693" 00:15:11.217 }, 00:15:11.217 "auth": { 00:15:11.217 "state": "completed", 00:15:11.217 "digest": "sha256", 00:15:11.217 "dhgroup": "ffdhe3072" 00:15:11.217 } 00:15:11.217 } 00:15:11.217 ]' 00:15:11.217 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.217 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.217 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.217 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:11.217 12:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.475 12:29:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.475 12:29:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.475 12:29:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.733 12:29:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:15:11.733 12:29:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:15:13.103 12:29:18 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.103 12:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:15:13.103 12:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.103 12:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.103 12:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.103 12:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:13.103 12:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.103 12:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:13.103 12:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:13.667 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:13.667 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.667 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:13.667 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:13.667 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:13.667 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.667 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.667 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.667 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.667 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.667 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.667 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.667 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.926 00:15:13.926 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.926 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.926 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.492 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.492 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.492 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.492 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.492 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.492 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.492 { 00:15:14.492 "cntlid": 25, 00:15:14.492 "qid": 0, 00:15:14.492 "state": "enabled", 00:15:14.492 "thread": "nvmf_tgt_poll_group_000", 00:15:14.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:15:14.492 "listen_address": { 00:15:14.492 "trtype": "RDMA", 00:15:14.492 "adrfam": "IPv4", 00:15:14.492 "traddr": "192.168.100.8", 00:15:14.492 "trsvcid": "4420" 00:15:14.492 }, 00:15:14.492 "peer_address": { 00:15:14.492 "trtype": "RDMA", 00:15:14.492 "adrfam": "IPv4", 00:15:14.492 "traddr": "192.168.100.8", 00:15:14.492 "trsvcid": "50442" 00:15:14.492 }, 00:15:14.492 "auth": { 00:15:14.492 "state": "completed", 00:15:14.492 "digest": "sha256", 00:15:14.492 "dhgroup": "ffdhe4096" 00:15:14.492 } 00:15:14.492 } 00:15:14.492 ]' 00:15:14.492 12:29:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.492 12:29:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.492 12:29:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.492 12:29:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:14.492 12:29:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.492 12:29:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.492 12:29:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.492 12:29:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.750 12:29:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:15:14.750 12:29:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:15:16.124 12:29:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.124 12:29:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:15:16.124 12:29:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.124 12:29:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.124 12:29:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.124 12:29:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.124 12:29:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:16.124 12:29:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:16.690 12:29:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:16.690 12:29:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.690 12:29:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:16.690 12:29:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:16.690 12:29:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:16.690 12:29:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.690 12:29:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.690 12:29:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.690 12:29:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.690 12:29:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.690 12:29:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.690 12:29:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.690 12:29:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.948 00:15:16.948 12:29:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.948 12:29:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.948 12:29:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.515 12:29:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.515 12:29:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.515 12:29:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.515 12:29:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.515 12:29:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.515 12:29:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.515 { 00:15:17.515 "cntlid": 27, 00:15:17.515 "qid": 0, 00:15:17.515 "state": "enabled", 00:15:17.515 "thread": "nvmf_tgt_poll_group_000", 00:15:17.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:15:17.515 "listen_address": { 00:15:17.515 "trtype": "RDMA", 00:15:17.515 "adrfam": "IPv4", 00:15:17.515 "traddr": "192.168.100.8", 00:15:17.515 "trsvcid": "4420" 00:15:17.515 }, 00:15:17.515 "peer_address": { 00:15:17.515 "trtype": "RDMA", 00:15:17.515 "adrfam": "IPv4", 00:15:17.515 "traddr": "192.168.100.8", 00:15:17.515 "trsvcid": "40636" 00:15:17.515 }, 00:15:17.515 "auth": { 00:15:17.515 "state": "completed", 00:15:17.515 "digest": "sha256", 00:15:17.515 "dhgroup": "ffdhe4096" 00:15:17.515 } 00:15:17.515 } 00:15:17.515 ]' 00:15:17.515 12:29:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.515 12:29:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.515 12:29:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.515 12:29:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:17.515 12:29:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.515 12:29:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.515 12:29:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.515 12:29:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.081 12:29:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:15:18.081 12:29:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:15:19.455 12:29:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.455 12:29:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:15:19.455 12:29:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.455 12:29:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.455 12:29:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.455 12:29:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.455 12:29:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:19.455 12:29:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:19.713 12:29:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:19.713 12:29:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.713 12:29:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:19.713 12:29:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:19.713 12:29:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:19.713 12:29:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.713 12:29:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.713 12:29:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.713 12:29:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.713 12:29:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.713 12:29:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.713 12:29:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.713 12:29:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.280 00:15:20.280 12:29:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.280 12:29:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.280 12:29:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.537 12:29:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.537 12:29:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.537 12:29:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.537 12:29:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.537 12:29:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.537 12:29:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.537 { 00:15:20.537 "cntlid": 29, 00:15:20.537 "qid": 0, 00:15:20.537 "state": "enabled", 00:15:20.537 "thread": "nvmf_tgt_poll_group_000", 00:15:20.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:15:20.537 "listen_address": { 00:15:20.537 "trtype": "RDMA", 00:15:20.537 "adrfam": "IPv4", 00:15:20.537 "traddr": "192.168.100.8", 00:15:20.537 "trsvcid": "4420" 00:15:20.537 }, 00:15:20.537 "peer_address": { 00:15:20.537 "trtype": "RDMA", 00:15:20.537 "adrfam": "IPv4", 00:15:20.537 "traddr": "192.168.100.8", 00:15:20.537 "trsvcid": "38184" 00:15:20.537 }, 00:15:20.537 "auth": { 00:15:20.537 "state": "completed", 00:15:20.537 "digest": "sha256", 00:15:20.537 "dhgroup": "ffdhe4096" 00:15:20.537 } 00:15:20.537 } 00:15:20.537 ]' 00:15:20.537 12:29:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.537 12:29:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.537 12:29:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.537 12:29:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:20.537 12:29:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.537 12:29:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.537 12:29:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.537 12:29:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.103 12:29:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:15:21.103 12:29:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:15:22.478 12:29:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.478 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:15:22.478 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.478 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.478 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.478 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.478 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:22.478 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:22.736 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:22.736 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.736 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:22.736 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:22.736 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:22.736 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.736 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key3 00:15:22.736 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.736 12:29:28 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.736 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.736 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:22.736 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.736 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:23.302 00:15:23.302 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.302 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.302 12:29:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.561 12:29:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.561 12:29:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.561 12:29:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.561 12:29:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.561 12:29:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.561 12:29:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.561 { 00:15:23.561 "cntlid": 31, 00:15:23.561 "qid": 0, 00:15:23.561 "state": "enabled", 00:15:23.561 "thread": "nvmf_tgt_poll_group_000", 00:15:23.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:15:23.561 "listen_address": { 00:15:23.561 "trtype": "RDMA", 00:15:23.561 "adrfam": "IPv4", 00:15:23.561 "traddr": "192.168.100.8", 00:15:23.561 "trsvcid": "4420" 00:15:23.561 }, 00:15:23.561 "peer_address": { 00:15:23.561 "trtype": "RDMA", 00:15:23.561 "adrfam": "IPv4", 00:15:23.561 "traddr": "192.168.100.8", 00:15:23.561 "trsvcid": "58945" 00:15:23.561 }, 00:15:23.561 "auth": { 00:15:23.561 "state": "completed", 00:15:23.561 "digest": "sha256", 00:15:23.561 "dhgroup": "ffdhe4096" 00:15:23.561 } 00:15:23.561 } 00:15:23.561 ]' 00:15:23.561 12:29:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.561 12:29:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:23.561 12:29:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.819 12:29:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:23.819 12:29:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
jq -r '.[0].auth.state' 00:15:23.819 12:29:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.819 12:29:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.819 12:29:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.078 12:29:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:15:24.078 12:29:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:15:25.450 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.450 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:15:25.450 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.450 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.450 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.450 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:25.450 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.450 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:25.450 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:26.016 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:26.016 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.016 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:26.016 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:26.016 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:26.016 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.016 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:15:26.016 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.016 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.016 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.016 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.016 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.016 12:29:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.583 00:15:26.583 12:29:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.583 12:29:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.583 12:29:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.841 12:29:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.841 12:29:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.841 12:29:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.841 12:29:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.841 12:29:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.841 12:29:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.841 { 00:15:26.841 "cntlid": 33, 00:15:26.841 "qid": 0, 00:15:26.841 "state": "enabled", 00:15:26.841 "thread": "nvmf_tgt_poll_group_000", 00:15:26.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:15:26.841 "listen_address": { 00:15:26.841 "trtype": "RDMA", 00:15:26.841 "adrfam": "IPv4", 00:15:26.841 "traddr": "192.168.100.8", 00:15:26.841 "trsvcid": "4420" 00:15:26.841 }, 00:15:26.841 "peer_address": { 00:15:26.841 "trtype": "RDMA", 00:15:26.841 "adrfam": "IPv4", 00:15:26.841 "traddr": "192.168.100.8", 00:15:26.841 "trsvcid": "52463" 00:15:26.841 }, 00:15:26.841 "auth": { 00:15:26.841 "state": "completed", 00:15:26.841 "digest": "sha256", 00:15:26.841 "dhgroup": "ffdhe6144" 00:15:26.841 } 00:15:26.841 } 00:15:26.841 ]' 00:15:26.841 12:29:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.842 12:29:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.842 12:29:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:15:27.100 12:29:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:27.100 12:29:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.100 12:29:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.100 12:29:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.100 12:29:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.357 12:29:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:15:27.357 12:29:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:15:28.732 12:29:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.732 12:29:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:15:28.732 12:29:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.732 12:29:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.732 12:29:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.732 12:29:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.732 12:29:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:28.732 12:29:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:29.013 12:29:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:29.013 12:29:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.013 12:29:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:29.013 12:29:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:29.013 12:29:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:29.013 12:29:34 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.013 12:29:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.013 12:29:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.013 12:29:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.272 12:29:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.272 12:29:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.272 12:29:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.272 12:29:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.862 00:15:29.862 12:29:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.862 12:29:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.862 12:29:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.156 12:29:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.156 12:29:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.156 12:29:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.156 12:29:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.156 12:29:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.156 12:29:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.156 { 00:15:30.156 "cntlid": 35, 00:15:30.156 "qid": 0, 00:15:30.156 "state": "enabled", 00:15:30.156 "thread": "nvmf_tgt_poll_group_000", 00:15:30.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:15:30.156 "listen_address": { 00:15:30.156 "trtype": "RDMA", 00:15:30.156 "adrfam": "IPv4", 00:15:30.156 "traddr": "192.168.100.8", 00:15:30.156 "trsvcid": "4420" 00:15:30.156 }, 00:15:30.156 "peer_address": { 00:15:30.156 "trtype": "RDMA", 00:15:30.156 "adrfam": "IPv4", 00:15:30.156 "traddr": "192.168.100.8", 00:15:30.156 "trsvcid": "55056" 00:15:30.156 }, 00:15:30.156 "auth": { 00:15:30.156 "state": "completed", 00:15:30.156 "digest": "sha256", 00:15:30.156 "dhgroup": "ffdhe6144" 00:15:30.156 } 00:15:30.156 } 
00:15:30.156 ]' 00:15:30.156 12:29:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.156 12:29:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.156 12:29:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.421 12:29:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:30.421 12:29:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.421 12:29:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.421 12:29:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.421 12:29:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.680 12:29:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:15:30.680 12:29:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:15:32.055 12:29:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.055 12:29:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:15:32.055 12:29:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.055 12:29:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.055 12:29:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.055 12:29:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.055 12:29:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:32.055 12:29:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:32.314 12:29:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:32.314 12:29:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.314 12:29:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha256 00:15:32.314 12:29:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:32.314 12:29:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:32.314 12:29:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.314 12:29:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.314 12:29:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.314 12:29:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.572 12:29:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.572 12:29:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.572 12:29:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.572 12:29:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.139 00:15:33.139 12:29:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.139 12:29:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.139 12:29:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.397 12:29:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.397 12:29:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.397 12:29:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.397 12:29:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.397 12:29:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.397 12:29:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.397 { 00:15:33.397 "cntlid": 37, 00:15:33.397 "qid": 0, 00:15:33.397 "state": "enabled", 00:15:33.397 "thread": "nvmf_tgt_poll_group_000", 00:15:33.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:15:33.397 "listen_address": { 00:15:33.397 "trtype": "RDMA", 00:15:33.397 "adrfam": "IPv4", 00:15:33.397 "traddr": "192.168.100.8", 00:15:33.397 "trsvcid": "4420" 00:15:33.397 }, 00:15:33.397 "peer_address": { 00:15:33.397 "trtype": "RDMA", 00:15:33.397 "adrfam": 
"IPv4", 00:15:33.397 "traddr": "192.168.100.8", 00:15:33.397 "trsvcid": "34058" 00:15:33.398 }, 00:15:33.398 "auth": { 00:15:33.398 "state": "completed", 00:15:33.398 "digest": "sha256", 00:15:33.398 "dhgroup": "ffdhe6144" 00:15:33.398 } 00:15:33.398 } 00:15:33.398 ]' 00:15:33.398 12:29:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.654 12:29:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:33.654 12:29:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.654 12:29:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:33.654 12:29:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.654 12:29:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.654 12:29:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.654 12:29:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.912 12:29:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:15:33.912 12:29:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:15:35.287 12:29:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.287 12:29:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:15:35.287 12:29:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.287 12:29:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.287 12:29:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.287 12:29:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.287 12:29:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:35.287 12:29:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:35.853 12:29:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha256 ffdhe6144 3 00:15:35.853 12:29:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.853 12:29:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:35.853 12:29:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:35.853 12:29:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:35.853 12:29:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.853 12:29:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key3 00:15:35.853 12:29:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.853 12:29:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.853 12:29:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.853 12:29:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:35.853 12:29:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:35.853 12:29:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:36.421 00:15:36.421 12:29:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.421 12:29:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.421 12:29:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.679 12:29:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.679 12:29:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.679 12:29:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.679 12:29:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.679 12:29:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.679 12:29:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.679 { 00:15:36.679 "cntlid": 39, 00:15:36.679 "qid": 0, 00:15:36.679 "state": "enabled", 00:15:36.679 "thread": "nvmf_tgt_poll_group_000", 00:15:36.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:15:36.679 "listen_address": { 00:15:36.679 "trtype": "RDMA", 00:15:36.679 "adrfam": "IPv4", 00:15:36.679 
"traddr": "192.168.100.8", 00:15:36.679 "trsvcid": "4420" 00:15:36.679 }, 00:15:36.679 "peer_address": { 00:15:36.679 "trtype": "RDMA", 00:15:36.679 "adrfam": "IPv4", 00:15:36.679 "traddr": "192.168.100.8", 00:15:36.679 "trsvcid": "57133" 00:15:36.679 }, 00:15:36.679 "auth": { 00:15:36.679 "state": "completed", 00:15:36.679 "digest": "sha256", 00:15:36.679 "dhgroup": "ffdhe6144" 00:15:36.679 } 00:15:36.679 } 00:15:36.679 ]' 00:15:36.679 12:29:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.938 12:29:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:36.938 12:29:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.938 12:29:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:36.938 12:29:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.938 12:29:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.938 12:29:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.938 12:29:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.197 12:29:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:15:37.197 12:29:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:15:38.572 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.572 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:15:38.572 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.572 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.572 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.572 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:38.572 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.572 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:38.572 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:39.138 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:39.139 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.139 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:39.139 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:39.139 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:39.139 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.139 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.139 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.139 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.139 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.139 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.139 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.139 12:29:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.073 00:15:40.073 12:29:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.073 12:29:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.073 12:29:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.332 12:29:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.332 12:29:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.332 12:29:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.332 12:29:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.332 12:29:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.332 12:29:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.332 { 00:15:40.332 "cntlid": 41, 00:15:40.332 "qid": 0, 00:15:40.332 "state": "enabled", 
00:15:40.332 "thread": "nvmf_tgt_poll_group_000", 00:15:40.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:15:40.332 "listen_address": { 00:15:40.332 "trtype": "RDMA", 00:15:40.332 "adrfam": "IPv4", 00:15:40.332 "traddr": "192.168.100.8", 00:15:40.332 "trsvcid": "4420" 00:15:40.332 }, 00:15:40.332 "peer_address": { 00:15:40.332 "trtype": "RDMA", 00:15:40.332 "adrfam": "IPv4", 00:15:40.332 "traddr": "192.168.100.8", 00:15:40.332 "trsvcid": "60313" 00:15:40.332 }, 00:15:40.332 "auth": { 00:15:40.332 "state": "completed", 00:15:40.332 "digest": "sha256", 00:15:40.332 "dhgroup": "ffdhe8192" 00:15:40.332 } 00:15:40.332 } 00:15:40.332 ]' 00:15:40.332 12:29:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.590 12:29:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.590 12:29:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.590 12:29:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:40.590 12:29:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.590 12:29:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.590 12:29:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.590 12:29:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.849 12:29:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:15:40.849 12:29:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:15:42.222 12:29:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.222 12:29:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:15:42.222 12:29:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.222 12:29:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.222 12:29:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.222 12:29:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.222 12:29:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:42.222 12:29:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:42.789 12:29:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:42.789 12:29:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.789 12:29:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:42.789 12:29:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:42.789 12:29:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:42.789 12:29:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.789 12:29:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.789 12:29:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.789 12:29:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.789 12:29:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.789 12:29:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.789 12:29:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.789 12:29:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.724 00:15:43.724 12:29:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.724 12:29:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.725 12:29:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.983 12:29:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.983 12:29:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.983 12:29:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.983 12:29:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:43.983 12:29:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.983 12:29:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.983 { 00:15:43.983 "cntlid": 43, 00:15:43.983 "qid": 0, 00:15:43.983 "state": "enabled", 00:15:43.983 "thread": "nvmf_tgt_poll_group_000", 00:15:43.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:15:43.983 "listen_address": { 00:15:43.983 "trtype": "RDMA", 00:15:43.983 "adrfam": "IPv4", 00:15:43.983 "traddr": "192.168.100.8", 00:15:43.983 "trsvcid": "4420" 00:15:43.983 }, 00:15:43.983 "peer_address": { 00:15:43.983 "trtype": "RDMA", 00:15:43.983 "adrfam": "IPv4", 00:15:43.983 "traddr": "192.168.100.8", 00:15:43.983 "trsvcid": "44321" 00:15:43.983 }, 00:15:43.983 "auth": { 00:15:43.983 "state": "completed", 00:15:43.983 "digest": "sha256", 00:15:43.983 "dhgroup": "ffdhe8192" 00:15:43.983 } 00:15:43.983 } 00:15:43.983 ]' 00:15:43.983 12:29:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.242 12:29:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.242 12:29:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.242 12:29:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:44.242 12:29:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.242 12:29:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.242 12:29:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.242 12:29:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.501 12:29:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:15:44.501 12:29:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:15:45.876 12:29:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.135 12:29:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:15:46.135 12:29:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.135 12:29:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
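The trace above keeps replaying one connect_authenticate round-trip per digest/dhgroup/key combination. Condensed into plain commands, the sequence looks roughly like the sketch below. The paths, NQNs, transport parameters, and key names (key2/ckey2) are copied from the trace itself; the key material is assumed to have been registered earlier in target/auth.sh, and the target-side rpc.py is assumed to talk to the default SPDK socket (the trace hides rpc_cmd's internals behind xtrace_disable, so that last point is an assumption).

#!/usr/bin/env bash
# Sketch of one connect_authenticate round-trip, assuming the auth.sh setup above.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # mirrors target/auth.sh@31
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae

# Pin the host initiator to a single digest/dhgroup combination.
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Authorize the host on the subsystem with a DH-CHAP key (controller key optional).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Attach a controller from the host side, authenticating with the same keys.
hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Confirm the negotiated digest/dhgroup and the completed auth state on the target.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

# Tear down before the next digest/dhgroup/key combination.
hostrpc bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Each combination is then exercised a second time through the kernel initiator (nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ..., followed by nvme disconnect), as the DHHC-1 lines in the trace show. Note also the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion at target/auth.sh@68: it emits the controller-key argument pair only when a ckey exists for that key index, which is why the key3 runs above call nvmf_subsystem_add_host with --dhchap-key key3 alone.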
00:15:46.135 12:29:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.135 12:29:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.135 12:29:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:46.135 12:29:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:46.393 12:29:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:46.393 12:29:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.393 12:29:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:46.393 12:29:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:46.393 12:29:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:46.393 12:29:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.394 12:29:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.394 12:29:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.394 12:29:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.394 12:29:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.394 12:29:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.394 12:29:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.394 12:29:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.329 00:15:47.329 12:29:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.329 12:29:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.329 12:29:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.895 12:29:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.895 12:29:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.895 12:29:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.895 12:29:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.895 12:29:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.895 12:29:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.895 { 00:15:47.895 "cntlid": 45, 00:15:47.895 "qid": 0, 00:15:47.895 "state": "enabled", 00:15:47.895 "thread": "nvmf_tgt_poll_group_000", 00:15:47.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:15:47.895 "listen_address": { 00:15:47.895 "trtype": "RDMA", 00:15:47.895 "adrfam": "IPv4", 00:15:47.895 "traddr": "192.168.100.8", 00:15:47.895 "trsvcid": "4420" 00:15:47.895 }, 00:15:47.895 "peer_address": { 00:15:47.895 "trtype": "RDMA", 00:15:47.895 "adrfam": "IPv4", 00:15:47.895 "traddr": "192.168.100.8", 00:15:47.895 "trsvcid": "41020" 00:15:47.895 }, 00:15:47.895 "auth": { 00:15:47.895 "state": "completed", 00:15:47.895 "digest": "sha256", 00:15:47.895 "dhgroup": "ffdhe8192" 00:15:47.895 } 00:15:47.895 } 00:15:47.895 ]' 00:15:47.895 12:29:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.895 12:29:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.896 12:29:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.896 12:29:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:47.896 12:29:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.896 12:29:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.896 12:29:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.896 12:29:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.461 12:29:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:15:48.461 12:29:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:15:49.837 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.837 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:15:49.837 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.837 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.837 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.837 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.837 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:49.837 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:50.095 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:50.095 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.095 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:50.095 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:50.095 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:50.095 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.095 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key3 00:15:50.095 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.095 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.095 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.095 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:50.095 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.095 12:29:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:51.029 00:15:51.288 12:29:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.288 12:29:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.288 12:29:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.546 
12:29:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.546 12:29:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.546 12:29:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.546 12:29:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.546 12:29:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.546 12:29:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.546 { 00:15:51.546 "cntlid": 47, 00:15:51.546 "qid": 0, 00:15:51.546 "state": "enabled", 00:15:51.546 "thread": "nvmf_tgt_poll_group_000", 00:15:51.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:15:51.546 "listen_address": { 00:15:51.546 "trtype": "RDMA", 00:15:51.546 "adrfam": "IPv4", 00:15:51.546 "traddr": "192.168.100.8", 00:15:51.546 "trsvcid": "4420" 00:15:51.546 }, 00:15:51.546 "peer_address": { 00:15:51.546 "trtype": "RDMA", 00:15:51.546 "adrfam": "IPv4", 00:15:51.546 "traddr": "192.168.100.8", 00:15:51.546 "trsvcid": "35269" 00:15:51.546 }, 00:15:51.546 "auth": { 00:15:51.546 "state": "completed", 00:15:51.546 "digest": "sha256", 00:15:51.546 "dhgroup": "ffdhe8192" 00:15:51.546 } 00:15:51.546 } 00:15:51.546 ]' 00:15:51.546 12:29:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.546 12:29:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.546 12:29:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.546 12:29:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:51.546 12:29:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.546 12:29:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.546 12:29:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.546 12:29:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.113 12:29:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:15:52.113 12:29:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:15:53.488 12:29:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.488 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:15:53.488 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.488 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.488 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.488 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:53.488 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.488 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.488 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:53.488 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:53.746 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:53.746 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.746 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.746 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:53.746 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:53.746 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.746 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.746 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.746 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.746 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.746 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.746 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.746 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.341 00:15:54.341 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:15:54.341 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.341 12:29:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.619 12:30:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.619 12:30:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.619 12:30:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.619 12:30:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.619 12:30:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.619 12:30:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.619 { 00:15:54.619 "cntlid": 49, 00:15:54.619 "qid": 0, 00:15:54.619 "state": "enabled", 00:15:54.619 "thread": "nvmf_tgt_poll_group_000", 00:15:54.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:15:54.619 "listen_address": { 00:15:54.619 "trtype": "RDMA", 00:15:54.619 "adrfam": "IPv4", 00:15:54.619 "traddr": "192.168.100.8", 00:15:54.619 "trsvcid": "4420" 00:15:54.619 }, 00:15:54.619 "peer_address": { 00:15:54.619 "trtype": "RDMA", 00:15:54.619 "adrfam": "IPv4", 00:15:54.619 "traddr": "192.168.100.8", 00:15:54.619 "trsvcid": "37607" 00:15:54.619 }, 00:15:54.619 "auth": { 00:15:54.619 "state": "completed", 00:15:54.619 "digest": "sha384", 00:15:54.620 "dhgroup": "null" 00:15:54.620 } 00:15:54.620 } 00:15:54.620 ]' 00:15:54.620 12:30:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.620 12:30:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.620 12:30:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.620 12:30:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:54.620 12:30:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.620 12:30:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.620 12:30:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.620 12:30:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.883 12:30:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:15:54.883 12:30:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret 
DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=:
00:15:56.257 12:30:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:56.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:56.257 12:30:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
00:15:56.257 12:30:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.257 12:30:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:56.257 12:30:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.257 12:30:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:56.257 12:30:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:56.257 12:30:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:56.823 12:30:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:15:56.823 12:30:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:56.823 12:30:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:56.823 12:30:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:56.823 12:30:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:56.823 12:30:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:56.823 12:30:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:56.823 12:30:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.823 12:30:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:56.823 12:30:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.823 12:30:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:56.824 12:30:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:56.824 12:30:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:57.083
00:15:57.083 12:30:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:57.083 12:30:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:57.083 12:30:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:57.341 12:30:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:57.341 12:30:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:57.341 12:30:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:57.341 12:30:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:57.341 12:30:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:57.341 12:30:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:57.341 {
00:15:57.341 "cntlid": 51,
00:15:57.341 "qid": 0,
00:15:57.341 "state": "enabled",
00:15:57.341 "thread": "nvmf_tgt_poll_group_000",
00:15:57.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae",
00:15:57.341 "listen_address": {
00:15:57.341 "trtype": "RDMA",
00:15:57.341 "adrfam": "IPv4",
00:15:57.341 "traddr": "192.168.100.8",
00:15:57.341 "trsvcid": "4420"
00:15:57.341 },
00:15:57.341 "peer_address": {
00:15:57.341 "trtype": "RDMA",
00:15:57.341 "adrfam": "IPv4",
00:15:57.341 "traddr": "192.168.100.8",
00:15:57.341 "trsvcid": "43665"
00:15:57.341 },
00:15:57.341 "auth": {
00:15:57.341 "state": "completed",
00:15:57.341 "digest": "sha384",
00:15:57.341 "dhgroup": "null"
00:15:57.341 }
00:15:57.341 }
00:15:57.341 ]'
00:15:57.341 12:30:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:57.600 12:30:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:57.600 12:30:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:57.600 12:30:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:57.600 12:30:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:57.600 12:30:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:57.600 12:30:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:57.600 12:30:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:57.858 12:30:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==:
00:15:57.858 12:30:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==:
00:15:59.232 12:30:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:59.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:59.232 12:30:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
00:15:59.233 12:30:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:59.233 12:30:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:59.233 12:30:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:59.233 12:30:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:59.233 12:30:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:59.233 12:30:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:59.798 12:30:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:15:59.798 12:30:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:59.798 12:30:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:59.798 12:30:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:59.798 12:30:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:59.798 12:30:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:59.798 12:30:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:59.798 12:30:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:59.798 12:30:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:59.798 12:30:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:59.798 12:30:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:59.798 12:30:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:59.798 12:30:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:00.057
00:16:00.057 12:30:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:00.057 12:30:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:00.057 12:30:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:00.318 12:30:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:00.318 12:30:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:00.318 12:30:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.318 12:30:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:00.318 12:30:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.318 12:30:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:00.318 {
00:16:00.318 "cntlid": 53,
00:16:00.318 "qid": 0,
00:16:00.318 "state": "enabled",
00:16:00.318 "thread": "nvmf_tgt_poll_group_000",
00:16:00.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae",
00:16:00.318 "listen_address": {
00:16:00.318 "trtype": "RDMA",
00:16:00.318 "adrfam": "IPv4",
00:16:00.318 "traddr": "192.168.100.8",
00:16:00.318 "trsvcid": "4420"
00:16:00.318 },
00:16:00.318 "peer_address": {
00:16:00.318 "trtype": "RDMA",
00:16:00.318 "adrfam": "IPv4",
00:16:00.318 "traddr": "192.168.100.8",
00:16:00.318 "trsvcid": "58910"
00:16:00.318 },
00:16:00.318 "auth": {
00:16:00.318 "state": "completed",
00:16:00.318 "digest": "sha384",
00:16:00.318 "dhgroup": "null"
00:16:00.318 }
00:16:00.318 }
00:16:00.318 ]'
00:16:00.318 12:30:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:00.576 12:30:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:00.576 12:30:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:00.576 12:30:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:00.576 12:30:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:00.576 12:30:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:00.576 12:30:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:00.576 12:30:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:00.835 12:30:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta:
00:16:00.835 12:30:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta:
00:16:02.211 12:30:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:02.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:02.211 12:30:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
00:16:02.211 12:30:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.211 12:30:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:02.211 12:30:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.211 12:30:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:02.211 12:30:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:16:02.211 12:30:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:16:02.777 12:30:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:16:02.777 12:30:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:02.777 12:30:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:02.777 12:30:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:02.777 12:30:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:02.777 12:30:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:02.777 12:30:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key3
00:16:02.777 12:30:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.777 12:30:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:02.777 12:30:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.777 12:30:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:02.777 12:30:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:02.777 12:30:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:03.036
00:16:03.036 12:30:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:03.036 12:30:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:03.036 12:30:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:03.295 12:30:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:03.295 12:30:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:03.295 12:30:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.295 12:30:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:03.295 12:30:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.295 12:30:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:03.295 {
00:16:03.295 "cntlid": 55,
00:16:03.295 "qid": 0,
00:16:03.295 "state": "enabled",
00:16:03.295 "thread": "nvmf_tgt_poll_group_000",
00:16:03.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae",
00:16:03.295 "listen_address": {
00:16:03.295 "trtype": "RDMA",
00:16:03.295 "adrfam": "IPv4",
00:16:03.295 "traddr": "192.168.100.8",
00:16:03.295 "trsvcid": "4420"
00:16:03.295 },
00:16:03.295 "peer_address": {
00:16:03.295 "trtype": "RDMA",
00:16:03.295 "adrfam": "IPv4",
00:16:03.295 "traddr": "192.168.100.8",
00:16:03.295 "trsvcid": "35559"
00:16:03.295 },
00:16:03.295 "auth": {
00:16:03.295 "state": "completed",
00:16:03.295 "digest": "sha384",
00:16:03.295 "dhgroup": "null"
00:16:03.295 }
00:16:03.295 }
00:16:03.295 ]'
00:16:03.295 12:30:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:03.553 12:30:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:03.553 12:30:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:03.553 12:30:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:03.553 12:30:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:03.553 12:30:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:03.553 12:30:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:03.553 12:30:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:03.811 12:30:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=:
00:16:03.812 12:30:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=:
00:16:05.186 12:30:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:05.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:05.186 12:30:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
00:16:05.186 12:30:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:05.186 12:30:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:05.186 12:30:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.186 12:30:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:05.186 12:30:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:05.186 12:30:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:05.186 12:30:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:05.754 12:30:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:16:05.754 12:30:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:05.754 12:30:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:05.754 12:30:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:05.754 12:30:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:05.755 12:30:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:05.755 12:30:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:05.755 12:30:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:05.755 12:30:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:05.755 12:30:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.755 12:30:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:05.755 12:30:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:05.755 12:30:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:06.014
00:16:06.014 12:30:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:06.014 12:30:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:06.014 12:30:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:06.272 12:30:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:06.272 12:30:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:06.272 12:30:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.272 12:30:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:06.272 12:30:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.272 12:30:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:06.272 {
00:16:06.272 "cntlid": 57,
00:16:06.272 "qid": 0,
00:16:06.272 "state": "enabled",
00:16:06.272 "thread": "nvmf_tgt_poll_group_000",
00:16:06.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae",
00:16:06.272 "listen_address": {
00:16:06.272 "trtype": "RDMA",
00:16:06.272 "adrfam": "IPv4",
00:16:06.272 "traddr": "192.168.100.8",
00:16:06.272 "trsvcid": "4420"
00:16:06.272 },
00:16:06.272 "peer_address": {
00:16:06.272 "trtype": "RDMA",
00:16:06.272 "adrfam": "IPv4",
00:16:06.272 "traddr": "192.168.100.8",
00:16:06.272 "trsvcid": "34107"
00:16:06.273 },
00:16:06.273 "auth": {
00:16:06.273 "state": "completed",
00:16:06.273 "digest": "sha384",
00:16:06.273 "dhgroup": "ffdhe2048"
00:16:06.273 }
00:16:06.273 }
00:16:06.273 ]'
00:16:06.273 12:30:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:06.533 12:30:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:06.533 12:30:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:06.533 12:30:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:06.534 12:30:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:06.534 12:30:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:06.534 12:30:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:06.534 12:30:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:06.792 12:30:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=:
00:16:06.793 12:30:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=:
00:16:08.168 12:30:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:08.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:08.168 12:30:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
00:16:08.168 12:30:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.168 12:30:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:08.168 12:30:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.168 12:30:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:08.168 12:30:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:08.168 12:30:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:08.734 12:30:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:16:08.734 12:30:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:08.734 12:30:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:08.734 12:30:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:08.734 12:30:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:08.734 12:30:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:08.734 12:30:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:08.734 12:30:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.734 12:30:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:08.734 12:30:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.734 12:30:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:08.734 12:30:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:08.734 12:30:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:08.992
00:16:08.992 12:30:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:08.992 12:30:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:08.992 12:30:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:09.250 12:30:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:09.250 12:30:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:09.250 12:30:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:09.250 12:30:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:09.250 12:30:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:09.508 12:30:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:09.508 {
00:16:09.508 "cntlid": 59,
00:16:09.508 "qid": 0,
00:16:09.508 "state": "enabled",
00:16:09.508 "thread": "nvmf_tgt_poll_group_000",
00:16:09.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae",
00:16:09.508 "listen_address": {
00:16:09.508 "trtype": "RDMA",
00:16:09.508 "adrfam": "IPv4",
00:16:09.508 "traddr": "192.168.100.8",
00:16:09.508 "trsvcid": "4420"
00:16:09.508 },
00:16:09.508 "peer_address": {
00:16:09.508 "trtype": "RDMA",
00:16:09.508 "adrfam": "IPv4",
00:16:09.508 "traddr": "192.168.100.8",
00:16:09.508 "trsvcid": "53176"
00:16:09.508 },
00:16:09.508 "auth": {
00:16:09.508 "state": "completed",
00:16:09.508 "digest": "sha384",
00:16:09.508 "dhgroup": "ffdhe2048"
00:16:09.508 }
00:16:09.508 }
00:16:09.508 ]'
00:16:09.508 12:30:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:09.508 12:30:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:09.508 12:30:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:09.508 12:30:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:09.508 12:30:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:09.508 12:30:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:09.508 12:30:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:09.508 12:30:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:09.766 12:30:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==:
00:16:09.766 12:30:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==:
00:16:11.143 12:30:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:11.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:11.143 12:30:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
00:16:11.143 12:30:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.143 12:30:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.143 12:30:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.143 12:30:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:11.143 12:30:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:11.143 12:30:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:11.709 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2
00:16:11.709 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:11.709 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:11.709 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:11.709 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:11.709 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:11.709 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:11.709 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.709 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.709 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.709 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:11.709 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:11.709 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:11.967
00:16:11.967 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:11.967 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:11.967 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:12.226 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:12.226 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:12.226 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:12.226 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:12.226 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:12.226 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:12.226 {
00:16:12.226 "cntlid": 61,
00:16:12.226 "qid": 0,
00:16:12.226 "state": "enabled",
00:16:12.226 "thread": "nvmf_tgt_poll_group_000",
00:16:12.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae",
00:16:12.226 "listen_address": {
00:16:12.226 "trtype": "RDMA",
00:16:12.226 "adrfam": "IPv4",
00:16:12.226 "traddr": "192.168.100.8",
00:16:12.226 "trsvcid": "4420"
00:16:12.226 },
00:16:12.226 "peer_address": {
00:16:12.226 "trtype": "RDMA",
00:16:12.226 "adrfam": "IPv4",
00:16:12.226 "traddr": "192.168.100.8",
00:16:12.226 "trsvcid": "55256"
00:16:12.226 },
00:16:12.226 "auth": {
00:16:12.226 "state": "completed",
00:16:12.226 "digest": "sha384",
00:16:12.226 "dhgroup": "ffdhe2048"
00:16:12.226 }
00:16:12.226 }
00:16:12.226 ]'
00:16:12.226 12:30:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:12.484 12:30:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:12.484 12:30:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:12.484 12:30:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:12.484 12:30:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:12.484 12:30:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:12.484 12:30:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:12.484 12:30:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:12.742 12:30:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta:
00:16:12.742 12:30:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta:
00:16:14.118 12:30:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:14.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:14.118 12:30:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
00:16:14.118 12:30:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.118 12:30:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.118 12:30:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.118 12:30:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:14.118 12:30:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:14.118 12:30:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:14.685 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:16:14.685 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:14.685 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:14.685 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:14.685 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:14.685 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:14.685 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key3
00:16:14.685 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.685 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.685 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.685 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:14.685 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:14.685 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:14.944
00:16:14.944 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:14.944 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:14.944 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:15.202 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:15.202 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:15.202 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.202 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:15.460 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.460 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:15.460 {
00:16:15.460 "cntlid": 63,
00:16:15.460 "qid": 0,
00:16:15.460 "state": "enabled",
00:16:15.460 "thread": "nvmf_tgt_poll_group_000",
00:16:15.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae",
00:16:15.460 "listen_address": {
00:16:15.460 "trtype": "RDMA",
00:16:15.460 "adrfam": "IPv4",
00:16:15.460 "traddr": "192.168.100.8",
00:16:15.460 "trsvcid": "4420"
00:16:15.460 },
00:16:15.460 "peer_address": {
00:16:15.460 "trtype": "RDMA",
00:16:15.460 "adrfam": "IPv4",
00:16:15.460 "traddr": "192.168.100.8",
00:16:15.460 "trsvcid": "40429"
00:16:15.460 },
00:16:15.460 "auth": {
00:16:15.460 "state": "completed",
00:16:15.460 "digest": "sha384",
00:16:15.460 "dhgroup": "ffdhe2048"
00:16:15.460 }
00:16:15.460 }
00:16:15.460 ]'
00:16:15.460 12:30:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:15.460 12:30:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:15.460 12:30:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:15.460 12:30:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:15.460 12:30:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:15.460 12:30:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:15.460 12:30:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:15.460 12:30:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:16.024 12:30:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=:
00:16:16.024 12:30:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=:
00:16:17.395 12:30:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:17.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:17.395 12:30:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
00:16:17.395 12:30:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.395 12:30:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:17.395 12:30:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.395 12:30:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:17.395 12:30:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:17.395 12:30:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:17.395 12:30:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:17.652 12:30:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:16:17.652 12:30:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:17.652 12:30:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:17.652 12:30:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:17.652 12:30:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:17.652 12:30:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:17.652 12:30:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:17.652 12:30:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.652 12:30:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:17.652 12:30:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.652 12:30:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:17.653 12:30:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:17.653 12:30:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:17.910
00:16:18.168 12:30:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:18.168 12:30:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:18.168 12:30:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:18.426 12:30:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:18.426 12:30:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:18.426 12:30:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.426 12:30:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.426 12:30:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.426 12:30:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:18.426 {
00:16:18.426 "cntlid": 65,
00:16:18.426 "qid": 0,
00:16:18.426 "state": "enabled",
00:16:18.426 "thread": "nvmf_tgt_poll_group_000",
00:16:18.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae",
00:16:18.426 "listen_address": {
00:16:18.426 "trtype": "RDMA",
00:16:18.426 "adrfam": "IPv4",
00:16:18.426 "traddr": "192.168.100.8",
00:16:18.426 "trsvcid": "4420"
00:16:18.426 },
00:16:18.426 "peer_address": {
00:16:18.426 "trtype": "RDMA",
00:16:18.426 "adrfam": "IPv4",
00:16:18.426 "traddr": "192.168.100.8",
00:16:18.426 "trsvcid": "60176"
00:16:18.426 },
00:16:18.426 "auth": {
00:16:18.426 "state": "completed",
00:16:18.426 "digest": "sha384",
00:16:18.426 "dhgroup": "ffdhe3072"
00:16:18.426 }
00:16:18.426 }
00:16:18.426 ]'
00:16:18.426 12:30:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:18.427 12:30:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:18.427 12:30:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:18.427 12:30:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:18.427 12:30:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:18.427 12:30:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:18.427 12:30:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:18.427 12:30:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:18.993 12:30:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=:
00:16:18.993 12:30:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=:
00:16:20.367 12:30:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:20.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:20.367 12:30:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
00:16:20.367 12:30:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.367 12:30:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.367 12:30:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.367 12:30:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:20.367 12:30:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:20.368 12:30:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:20.626 12:30:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1
00:16:20.626 12:30:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:20.626 12:30:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:20.626 12:30:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:20.626 12:30:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:20.626 12:30:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:20.626 12:30:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:20.626 12:30:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.626 12:30:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.626 12:30:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.626 12:30:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:20.626 12:30:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:20.626 12:30:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:21.194
00:16:21.194 12:30:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:21.194 12:30:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:21.194 12:30:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:21.455 12:30:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:21.455 12:30:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:21.455 12:30:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:21.455 12:30:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:21.455 12:30:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:21.455 12:30:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:21.455 {
00:16:21.455 "cntlid": 67,
00:16:21.455 "qid": 0,
00:16:21.455 "state": "enabled",
00:16:21.455 "thread": "nvmf_tgt_poll_group_000",
00:16:21.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae",
00:16:21.455 "listen_address": { 00:16:21.455 "trtype": "RDMA", 00:16:21.455 "adrfam": "IPv4", 00:16:21.455 "traddr": "192.168.100.8", 00:16:21.455 "trsvcid": "4420" 00:16:21.455 }, 00:16:21.455 "peer_address": { 00:16:21.455 "trtype": "RDMA", 00:16:21.455 "adrfam": "IPv4", 00:16:21.455 "traddr": "192.168.100.8", 00:16:21.455 "trsvcid": "60366" 00:16:21.455 }, 00:16:21.455 "auth": { 00:16:21.455 "state": "completed", 00:16:21.455 "digest": "sha384", 00:16:21.455 "dhgroup": "ffdhe3072" 00:16:21.455 } 00:16:21.455 } 00:16:21.455 ]' 00:16:21.455 12:30:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.455 12:30:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.455 12:30:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.455 12:30:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:21.455 12:30:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.751 12:30:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.751 12:30:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.751 12:30:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.033 12:30:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:16:22.033 12:30:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:16:23.409 12:30:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.409 12:30:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:16:23.409 12:30:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.409 12:30:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.409 12:30:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.409 12:30:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.409 12:30:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:23.409 12:30:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:23.666 12:30:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:23.666 12:30:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.666 12:30:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:23.666 12:30:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:23.666 12:30:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:23.666 12:30:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.666 12:30:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.666 12:30:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.666 12:30:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.666 12:30:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.666 12:30:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.666 12:30:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.666 12:30:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.231 00:16:24.231 12:30:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.231 12:30:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.231 12:30:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.489 12:30:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.489 12:30:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.489 12:30:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.489 12:30:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.489 12:30:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.489 12:30:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:16:24.489 { 00:16:24.489 "cntlid": 69, 00:16:24.489 "qid": 0, 00:16:24.489 "state": "enabled", 00:16:24.489 "thread": "nvmf_tgt_poll_group_000", 00:16:24.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:16:24.489 "listen_address": { 00:16:24.489 "trtype": "RDMA", 00:16:24.489 "adrfam": "IPv4", 00:16:24.489 "traddr": "192.168.100.8", 00:16:24.489 "trsvcid": "4420" 00:16:24.489 }, 00:16:24.489 "peer_address": { 00:16:24.489 "trtype": "RDMA", 00:16:24.489 "adrfam": "IPv4", 00:16:24.489 "traddr": "192.168.100.8", 00:16:24.489 "trsvcid": "47742" 00:16:24.489 }, 00:16:24.489 "auth": { 00:16:24.489 "state": "completed", 00:16:24.489 "digest": "sha384", 00:16:24.489 "dhgroup": "ffdhe3072" 00:16:24.489 } 00:16:24.489 } 00:16:24.489 ]' 00:16:24.489 12:30:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.489 12:30:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.489 12:30:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.489 12:30:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:24.489 12:30:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.748 12:30:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.748 12:30:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.748 12:30:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.006 12:30:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:16:25.006 12:30:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:16:26.378 12:30:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.378 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:16:26.378 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.378 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.378 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.378 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.378 12:30:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:26.378 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:26.636 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:26.636 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.636 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:26.636 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:26.636 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:26.636 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.636 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key3 00:16:26.636 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.636 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.636 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.636 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:26.636 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.636 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.201 00:16:27.201 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.201 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.201 12:30:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.459 12:30:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.459 12:30:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.459 12:30:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.459 12:30:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.459 12:30:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.459 12:30:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.459 { 00:16:27.459 "cntlid": 71, 00:16:27.459 "qid": 0, 00:16:27.459 "state": "enabled", 00:16:27.459 "thread": "nvmf_tgt_poll_group_000", 00:16:27.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:16:27.459 "listen_address": { 00:16:27.459 "trtype": "RDMA", 00:16:27.459 "adrfam": "IPv4", 00:16:27.459 "traddr": "192.168.100.8", 00:16:27.459 "trsvcid": "4420" 00:16:27.459 }, 00:16:27.459 "peer_address": { 00:16:27.459 "trtype": "RDMA", 00:16:27.459 "adrfam": "IPv4", 00:16:27.459 "traddr": "192.168.100.8", 00:16:27.459 "trsvcid": "53500" 00:16:27.459 }, 00:16:27.459 "auth": { 00:16:27.459 "state": "completed", 00:16:27.459 "digest": "sha384", 00:16:27.459 "dhgroup": "ffdhe3072" 00:16:27.459 } 00:16:27.459 } 00:16:27.459 ]' 00:16:27.459 12:30:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.459 12:30:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.459 12:30:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.717 12:30:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:27.717 12:30:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.717 12:30:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.717 12:30:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.717 12:30:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.975 12:30:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:16:27.975 12:30:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:16:29.350 12:30:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.350 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:16:29.350 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.350 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.607 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.608 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:16:29.608 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.608 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:29.608 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:29.866 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:29.866 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.866 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:29.866 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:29.866 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:29.866 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.866 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.866 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.866 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.866 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.866 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.866 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.866 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.433 00:16:30.433 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.433 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.433 12:30:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.691 12:30:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.691 12:30:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.691 12:30:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.691 12:30:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.691 12:30:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.691 12:30:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.691 { 00:16:30.691 "cntlid": 73, 00:16:30.691 "qid": 0, 00:16:30.691 "state": "enabled", 00:16:30.691 "thread": "nvmf_tgt_poll_group_000", 00:16:30.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:16:30.691 "listen_address": { 00:16:30.691 "trtype": "RDMA", 00:16:30.691 "adrfam": "IPv4", 00:16:30.691 "traddr": "192.168.100.8", 00:16:30.691 "trsvcid": "4420" 00:16:30.691 }, 00:16:30.691 "peer_address": { 00:16:30.691 "trtype": "RDMA", 00:16:30.691 "adrfam": "IPv4", 00:16:30.691 "traddr": "192.168.100.8", 00:16:30.691 "trsvcid": "38065" 00:16:30.691 }, 00:16:30.691 "auth": { 00:16:30.691 "state": "completed", 00:16:30.691 "digest": "sha384", 00:16:30.691 "dhgroup": "ffdhe4096" 00:16:30.691 } 00:16:30.691 } 00:16:30.691 ]' 00:16:30.691 12:30:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.691 12:30:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:30.691 12:30:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.691 12:30:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.691 12:30:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.691 12:30:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.691 12:30:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.691 12:30:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.257 12:30:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:16:31.257 12:30:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:16:32.660 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.660 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:16:32.660 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.660 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.660 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.660 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.660 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:32.660 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:32.919 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:32.919 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.919 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:32.919 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:32.919 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:32.919 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.919 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.919 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.919 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.919 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.919 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.919 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.919 12:30:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.484 00:16:33.484 12:30:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.484 12:30:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.484 12:30:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.742 12:30:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.742 12:30:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.742 12:30:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.742 12:30:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.742 12:30:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.742 12:30:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.742 { 00:16:33.742 "cntlid": 75, 00:16:33.742 "qid": 0, 00:16:33.742 "state": "enabled", 00:16:33.742 "thread": "nvmf_tgt_poll_group_000", 00:16:33.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:16:33.742 "listen_address": { 00:16:33.742 "trtype": "RDMA", 00:16:33.742 "adrfam": "IPv4", 00:16:33.742 "traddr": "192.168.100.8", 00:16:33.742 "trsvcid": "4420" 00:16:33.742 }, 00:16:33.742 "peer_address": { 00:16:33.742 "trtype": "RDMA", 00:16:33.742 "adrfam": "IPv4", 00:16:33.742 "traddr": "192.168.100.8", 00:16:33.742 "trsvcid": "44677" 00:16:33.742 }, 00:16:33.742 "auth": { 00:16:33.742 "state": "completed", 00:16:33.742 "digest": "sha384", 00:16:33.742 "dhgroup": "ffdhe4096" 00:16:33.742 } 00:16:33.742 } 00:16:33.742 ]' 00:16:33.742 12:30:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.742 12:30:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.742 12:30:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.001 12:30:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:34.001 12:30:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.001 12:30:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.001 12:30:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.001 12:30:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.259 12:30:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:16:34.260 12:30:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:16:35.636 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.636 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:16:35.636 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.636 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.636 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.636 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.636 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:35.636 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:36.202 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:36.202 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.202 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:36.202 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:36.202 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.202 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.202 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.202 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.202 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.202 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.202 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.202 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.202 12:30:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.460 00:16:36.460 12:30:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 
-- # hostrpc bdev_nvme_get_controllers 00:16:36.460 12:30:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.460 12:30:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.025 12:30:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.025 12:30:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.025 12:30:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.025 12:30:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.025 12:30:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.025 12:30:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.025 { 00:16:37.025 "cntlid": 77, 00:16:37.025 "qid": 0, 00:16:37.025 "state": "enabled", 00:16:37.025 "thread": "nvmf_tgt_poll_group_000", 00:16:37.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:16:37.025 "listen_address": { 00:16:37.025 "trtype": "RDMA", 00:16:37.025 "adrfam": "IPv4", 00:16:37.025 "traddr": "192.168.100.8", 00:16:37.025 "trsvcid": "4420" 00:16:37.025 }, 00:16:37.025 "peer_address": { 00:16:37.025 "trtype": "RDMA", 00:16:37.025 "adrfam": "IPv4", 00:16:37.025 "traddr": "192.168.100.8", 00:16:37.025 "trsvcid": "58656" 00:16:37.025 }, 00:16:37.025 "auth": { 00:16:37.025 "state": "completed", 00:16:37.025 "digest": "sha384", 00:16:37.025 "dhgroup": "ffdhe4096" 00:16:37.025 } 00:16:37.025 } 00:16:37.025 ]' 00:16:37.025 12:30:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.025 12:30:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.025 12:30:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.025 12:30:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:37.025 12:30:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.025 12:30:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.025 12:30:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.025 12:30:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.283 12:30:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:16:37.283 12:30:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret 
DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:16:38.658 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.916 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:16:38.916 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.916 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.916 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.916 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.916 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:38.916 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:39.185 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:39.185 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.185 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:39.185 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:39.185 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:39.186 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.186 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key3 00:16:39.186 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.186 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.186 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.186 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:39.186 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.186 12:30:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.752 00:16:39.753 12:30:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.753 12:30:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.753 12:30:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.011 12:30:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.011 12:30:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.011 12:30:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.011 12:30:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.011 12:30:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.011 12:30:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.011 { 00:16:40.011 "cntlid": 79, 00:16:40.011 "qid": 0, 00:16:40.011 "state": "enabled", 00:16:40.011 "thread": "nvmf_tgt_poll_group_000", 00:16:40.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:16:40.011 "listen_address": { 00:16:40.011 "trtype": "RDMA", 00:16:40.011 "adrfam": "IPv4", 00:16:40.011 "traddr": "192.168.100.8", 00:16:40.011 "trsvcid": "4420" 00:16:40.011 }, 00:16:40.011 "peer_address": { 00:16:40.011 "trtype": "RDMA", 00:16:40.011 "adrfam": "IPv4", 00:16:40.011 "traddr": "192.168.100.8", 00:16:40.011 "trsvcid": "38855" 00:16:40.011 }, 00:16:40.011 "auth": { 00:16:40.011 "state": "completed", 00:16:40.011 "digest": "sha384", 00:16:40.011 "dhgroup": "ffdhe4096" 00:16:40.011 } 00:16:40.011 } 00:16:40.011 ]' 00:16:40.011 12:30:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.011 12:30:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.011 12:30:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.268 12:30:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:40.269 12:30:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.269 12:30:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.269 12:30:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.269 12:30:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.526 12:30:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:16:40.526 12:30:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:16:41.901 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.901 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:16:41.901 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.901 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.901 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.901 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.901 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.901 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:41.901 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:42.159 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:42.159 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.159 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:42.159 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:42.159 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:42.159 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.159 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.159 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.159 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.159 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.159 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.159 12:30:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.159 12:30:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.096 00:16:43.096 12:30:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.096 12:30:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.096 12:30:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.355 12:30:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.355 12:30:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.355 12:30:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.355 12:30:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.355 12:30:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.355 12:30:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.355 { 00:16:43.355 "cntlid": 81, 00:16:43.355 "qid": 0, 00:16:43.355 "state": "enabled", 00:16:43.355 "thread": "nvmf_tgt_poll_group_000", 00:16:43.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:16:43.355 "listen_address": { 00:16:43.355 "trtype": "RDMA", 00:16:43.355 "adrfam": "IPv4", 00:16:43.355 "traddr": "192.168.100.8", 00:16:43.355 "trsvcid": "4420" 00:16:43.355 }, 00:16:43.355 "peer_address": { 00:16:43.355 "trtype": "RDMA", 00:16:43.355 "adrfam": "IPv4", 00:16:43.355 "traddr": "192.168.100.8", 00:16:43.355 "trsvcid": "33990" 00:16:43.355 }, 00:16:43.355 "auth": { 00:16:43.355 "state": "completed", 00:16:43.355 "digest": "sha384", 00:16:43.355 "dhgroup": "ffdhe6144" 00:16:43.355 } 00:16:43.355 } 00:16:43.355 ]' 00:16:43.355 12:30:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.355 12:30:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.355 12:30:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.355 12:30:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:43.355 12:30:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.355 12:30:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.355 12:30:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.356 12:30:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.919 12:30:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:16:43.919 12:30:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:16:45.293 12:30:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.293 12:30:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:16:45.293 12:30:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.293 12:30:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.293 12:30:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.293 12:30:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.293 12:30:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:45.293 12:30:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:45.552 12:30:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:45.552 12:30:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.552 12:30:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:45.552 12:30:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:45.552 12:30:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:45.552 12:30:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.552 12:30:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.552 12:30:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.552 12:30:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.552 12:30:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.552 12:30:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
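For reference, the connect_authenticate iteration being traced above boils down to the following host/target RPC sequence. This is a condensed sketch, not the test script itself: the rpc.py path, socket, addresses, NQNs, and key names are copied from this trace, and it assumes the DH-HMAC-CHAP keys key1/ckey1 were registered with the keyring earlier in the run.

# One connect_authenticate iteration (sha384 / ffdhe6144 / key1), as exercised above.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: restrict the initiator to the digest/dhgroup pair under test.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Target side (default RPC socket): allow the host on the subsystem with the keypair under test.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller over RDMA, authenticating with the same keypair.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1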
00:16:45.552 12:30:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.552 12:30:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.118 00:16:46.376 12:30:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.376 12:30:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.376 12:30:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.634 12:30:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.634 12:30:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.634 12:30:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.634 12:30:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.634 12:30:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.634 12:30:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.634 { 00:16:46.634 "cntlid": 83, 00:16:46.634 "qid": 0, 00:16:46.634 "state": "enabled", 00:16:46.634 "thread": "nvmf_tgt_poll_group_000", 00:16:46.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:16:46.634 "listen_address": { 00:16:46.634 "trtype": "RDMA", 00:16:46.634 "adrfam": "IPv4", 00:16:46.634 "traddr": "192.168.100.8", 00:16:46.634 "trsvcid": "4420" 00:16:46.634 }, 00:16:46.634 "peer_address": { 00:16:46.634 "trtype": "RDMA", 00:16:46.634 "adrfam": "IPv4", 00:16:46.634 "traddr": "192.168.100.8", 00:16:46.634 "trsvcid": "42609" 00:16:46.634 }, 00:16:46.634 "auth": { 00:16:46.634 "state": "completed", 00:16:46.634 "digest": "sha384", 00:16:46.634 "dhgroup": "ffdhe6144" 00:16:46.634 } 00:16:46.634 } 00:16:46.634 ]' 00:16:46.634 12:30:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.634 12:30:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.634 12:30:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.634 12:30:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:46.634 12:30:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.634 12:30:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.634 12:30:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
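The verification and teardown half that follows each attach (the target/auth.sh@73 through @78 lines traced above) reduces to the checks below; another sketch under the same assumptions, with the jq filters copied from the trace.

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Confirm the controller actually attached on the host side.
[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Pull the qpair list from the target and check the negotiated auth parameters.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Tear down before the next digest/dhgroup/key combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0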
00:16:46.634 12:30:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.202 12:30:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:16:47.202 12:30:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:16:48.600 12:30:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.600 12:30:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:16:48.600 12:30:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.600 12:30:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.600 12:30:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.600 12:30:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.600 12:30:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:48.600 12:30:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:48.858 12:30:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:48.858 12:30:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.858 12:30:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:48.858 12:30:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:48.858 12:30:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:48.858 12:30:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.858 12:30:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.858 12:30:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.858 12:30:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.858 12:30:54 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.858 12:30:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.858 12:30:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.858 12:30:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.426 00:16:49.426 12:30:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.426 12:30:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.426 12:30:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.992 12:30:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.992 12:30:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.992 12:30:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.992 12:30:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.992 12:30:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.992 12:30:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.992 { 00:16:49.992 "cntlid": 85, 00:16:49.992 "qid": 0, 00:16:49.992 "state": "enabled", 00:16:49.992 "thread": "nvmf_tgt_poll_group_000", 00:16:49.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:16:49.992 "listen_address": { 00:16:49.992 "trtype": "RDMA", 00:16:49.992 "adrfam": "IPv4", 00:16:49.992 "traddr": "192.168.100.8", 00:16:49.992 "trsvcid": "4420" 00:16:49.992 }, 00:16:49.992 "peer_address": { 00:16:49.992 "trtype": "RDMA", 00:16:49.992 "adrfam": "IPv4", 00:16:49.992 "traddr": "192.168.100.8", 00:16:49.992 "trsvcid": "52307" 00:16:49.992 }, 00:16:49.992 "auth": { 00:16:49.992 "state": "completed", 00:16:49.992 "digest": "sha384", 00:16:49.992 "dhgroup": "ffdhe6144" 00:16:49.992 } 00:16:49.992 } 00:16:49.992 ]' 00:16:49.992 12:30:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.992 12:30:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.992 12:30:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.992 12:30:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.992 12:30:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.992 
12:30:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.992 12:30:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.992 12:30:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.558 12:30:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:16:50.558 12:30:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:16:51.932 12:30:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.932 12:30:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:16:51.932 12:30:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.932 12:30:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.932 12:30:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.932 12:30:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.932 12:30:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:51.932 12:30:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:52.191 12:30:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:52.191 12:30:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.191 12:30:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:52.191 12:30:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:52.191 12:30:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:52.191 12:30:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.191 12:30:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key3 00:16:52.191 12:30:57 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.191 12:30:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.191 12:30:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.191 12:30:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:52.191 12:30:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.191 12:30:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.758 00:16:52.758 12:30:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.758 12:30:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.758 12:30:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.324 12:30:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.324 12:30:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.324 12:30:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.324 12:30:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.324 12:30:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.324 12:30:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.324 { 00:16:53.324 "cntlid": 87, 00:16:53.324 "qid": 0, 00:16:53.324 "state": "enabled", 00:16:53.324 "thread": "nvmf_tgt_poll_group_000", 00:16:53.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:16:53.324 "listen_address": { 00:16:53.324 "trtype": "RDMA", 00:16:53.324 "adrfam": "IPv4", 00:16:53.324 "traddr": "192.168.100.8", 00:16:53.324 "trsvcid": "4420" 00:16:53.324 }, 00:16:53.325 "peer_address": { 00:16:53.325 "trtype": "RDMA", 00:16:53.325 "adrfam": "IPv4", 00:16:53.325 "traddr": "192.168.100.8", 00:16:53.325 "trsvcid": "33998" 00:16:53.325 }, 00:16:53.325 "auth": { 00:16:53.325 "state": "completed", 00:16:53.325 "digest": "sha384", 00:16:53.325 "dhgroup": "ffdhe6144" 00:16:53.325 } 00:16:53.325 } 00:16:53.325 ]' 00:16:53.325 12:30:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.325 12:30:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.325 12:30:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.325 12:30:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:16:53.325 12:30:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.325 12:30:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.325 12:30:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.325 12:30:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.592 12:30:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:16:53.592 12:30:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:16:54.973 12:31:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.973 12:31:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:16:54.973 12:31:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.973 12:31:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.231 12:31:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.231 12:31:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.231 12:31:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.231 12:31:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:55.231 12:31:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:55.489 12:31:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:55.490 12:31:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.490 12:31:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.490 12:31:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:55.490 12:31:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.490 12:31:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.490 12:31:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.490 12:31:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.490 12:31:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.490 12:31:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.490 12:31:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.490 12:31:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.490 12:31:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.423 00:16:56.423 12:31:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.423 12:31:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.423 12:31:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.988 12:31:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.988 12:31:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.988 12:31:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.988 12:31:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.988 12:31:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.988 12:31:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.988 { 00:16:56.988 "cntlid": 89, 00:16:56.988 "qid": 0, 00:16:56.988 "state": "enabled", 00:16:56.988 "thread": "nvmf_tgt_poll_group_000", 00:16:56.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:16:56.988 "listen_address": { 00:16:56.988 "trtype": "RDMA", 00:16:56.988 "adrfam": "IPv4", 00:16:56.988 "traddr": "192.168.100.8", 00:16:56.988 "trsvcid": "4420" 00:16:56.988 }, 00:16:56.988 "peer_address": { 00:16:56.988 "trtype": "RDMA", 00:16:56.988 "adrfam": "IPv4", 00:16:56.988 "traddr": "192.168.100.8", 00:16:56.988 "trsvcid": "36431" 00:16:56.988 }, 00:16:56.988 "auth": { 00:16:56.988 "state": "completed", 00:16:56.988 "digest": "sha384", 00:16:56.988 "dhgroup": "ffdhe8192" 00:16:56.988 } 00:16:56.988 } 00:16:56.988 ]' 00:16:56.988 12:31:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.988 12:31:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.988 12:31:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.988 12:31:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.988 12:31:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.988 12:31:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.988 12:31:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.988 12:31:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.245 12:31:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:16:57.245 12:31:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:16:58.620 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.620 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:16:58.620 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.620 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.620 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.620 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.878 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:58.878 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:59.137 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:59.137 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.137 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:59.137 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 
00:16:59.137 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:59.137 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.137 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.137 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.137 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.137 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.137 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.137 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.137 12:31:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.073 00:17:00.073 12:31:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.073 12:31:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.073 12:31:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.639 12:31:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.639 12:31:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.639 12:31:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.639 12:31:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.639 12:31:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.639 12:31:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.639 { 00:17:00.639 "cntlid": 91, 00:17:00.639 "qid": 0, 00:17:00.639 "state": "enabled", 00:17:00.639 "thread": "nvmf_tgt_poll_group_000", 00:17:00.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:17:00.639 "listen_address": { 00:17:00.639 "trtype": "RDMA", 00:17:00.639 "adrfam": "IPv4", 00:17:00.639 "traddr": "192.168.100.8", 00:17:00.639 "trsvcid": "4420" 00:17:00.639 }, 00:17:00.639 "peer_address": { 00:17:00.639 "trtype": "RDMA", 00:17:00.639 "adrfam": "IPv4", 00:17:00.639 "traddr": "192.168.100.8", 00:17:00.639 "trsvcid": "46924" 00:17:00.639 }, 00:17:00.639 "auth": { 
00:17:00.639 "state": "completed", 00:17:00.639 "digest": "sha384", 00:17:00.639 "dhgroup": "ffdhe8192" 00:17:00.639 } 00:17:00.639 } 00:17:00.639 ]' 00:17:00.639 12:31:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.639 12:31:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.639 12:31:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.639 12:31:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.639 12:31:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.639 12:31:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.639 12:31:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.639 12:31:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.897 12:31:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:17:00.897 12:31:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:17:02.271 12:31:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.530 12:31:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:17:02.530 12:31:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.530 12:31:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.530 12:31:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.530 12:31:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.530 12:31:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:02.530 12:31:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:02.789 12:31:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:02.789 12:31:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:17:02.789 12:31:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.789 12:31:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:02.789 12:31:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:02.789 12:31:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.789 12:31:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.789 12:31:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.789 12:31:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.789 12:31:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.789 12:31:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.789 12:31:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.789 12:31:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.723 00:17:03.723 12:31:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.723 12:31:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.723 12:31:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.290 12:31:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.290 12:31:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.290 12:31:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.290 12:31:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.290 12:31:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.290 12:31:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.290 { 00:17:04.290 "cntlid": 93, 00:17:04.290 "qid": 0, 00:17:04.290 "state": "enabled", 00:17:04.290 "thread": "nvmf_tgt_poll_group_000", 00:17:04.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:17:04.290 "listen_address": { 00:17:04.290 "trtype": "RDMA", 00:17:04.290 "adrfam": "IPv4", 00:17:04.290 "traddr": "192.168.100.8", 
00:17:04.290 "trsvcid": "4420" 00:17:04.290 }, 00:17:04.290 "peer_address": { 00:17:04.290 "trtype": "RDMA", 00:17:04.290 "adrfam": "IPv4", 00:17:04.290 "traddr": "192.168.100.8", 00:17:04.290 "trsvcid": "59541" 00:17:04.290 }, 00:17:04.290 "auth": { 00:17:04.290 "state": "completed", 00:17:04.290 "digest": "sha384", 00:17:04.290 "dhgroup": "ffdhe8192" 00:17:04.290 } 00:17:04.290 } 00:17:04.290 ]' 00:17:04.290 12:31:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.290 12:31:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.290 12:31:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.290 12:31:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.290 12:31:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.290 12:31:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.290 12:31:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.290 12:31:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.548 12:31:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:17:04.548 12:31:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:17:05.924 12:31:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.182 12:31:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:17:06.182 12:31:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.182 12:31:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.182 12:31:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.182 12:31:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.182 12:31:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:06.182 12:31:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:17:06.441 12:31:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:06.441 12:31:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.441 12:31:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.441 12:31:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:06.441 12:31:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:06.441 12:31:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.441 12:31:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key3 00:17:06.441 12:31:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.441 12:31:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.441 12:31:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.441 12:31:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.441 12:31:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.441 12:31:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.377 00:17:07.636 12:31:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.636 12:31:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.636 12:31:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.895 12:31:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.895 12:31:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.895 12:31:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.895 12:31:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.895 12:31:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.895 12:31:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.895 { 00:17:07.895 "cntlid": 95, 00:17:07.895 "qid": 0, 00:17:07.895 "state": "enabled", 00:17:07.895 "thread": "nvmf_tgt_poll_group_000", 00:17:07.895 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:17:07.895 "listen_address": { 00:17:07.895 "trtype": "RDMA", 00:17:07.895 "adrfam": "IPv4", 00:17:07.895 "traddr": "192.168.100.8", 00:17:07.895 "trsvcid": "4420" 00:17:07.895 }, 00:17:07.895 "peer_address": { 00:17:07.895 "trtype": "RDMA", 00:17:07.895 "adrfam": "IPv4", 00:17:07.895 "traddr": "192.168.100.8", 00:17:07.895 "trsvcid": "33922" 00:17:07.895 }, 00:17:07.895 "auth": { 00:17:07.895 "state": "completed", 00:17:07.895 "digest": "sha384", 00:17:07.895 "dhgroup": "ffdhe8192" 00:17:07.895 } 00:17:07.895 } 00:17:07.895 ]' 00:17:07.895 12:31:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.895 12:31:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.895 12:31:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.895 12:31:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.895 12:31:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.895 12:31:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.895 12:31:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.895 12:31:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.461 12:31:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:17:08.461 12:31:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:17:09.838 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.838 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:17:09.838 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.838 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.838 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.838 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:09.838 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.838 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.838 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:09.838 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:10.097 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:10.097 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.097 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:10.097 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:10.097 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:10.097 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.097 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.097 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.097 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.097 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.097 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.097 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.097 12:31:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.663 00:17:10.663 12:31:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.663 12:31:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.664 12:31:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.922 12:31:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.922 12:31:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.922 12:31:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.922 12:31:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.922 12:31:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.922 12:31:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.922 { 00:17:10.922 "cntlid": 97, 00:17:10.922 "qid": 0, 00:17:10.922 "state": "enabled", 00:17:10.922 "thread": "nvmf_tgt_poll_group_000", 00:17:10.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:17:10.922 "listen_address": { 00:17:10.922 "trtype": "RDMA", 00:17:10.922 "adrfam": "IPv4", 00:17:10.922 "traddr": "192.168.100.8", 00:17:10.922 "trsvcid": "4420" 00:17:10.922 }, 00:17:10.922 "peer_address": { 00:17:10.922 "trtype": "RDMA", 00:17:10.922 "adrfam": "IPv4", 00:17:10.922 "traddr": "192.168.100.8", 00:17:10.922 "trsvcid": "43928" 00:17:10.922 }, 00:17:10.922 "auth": { 00:17:10.922 "state": "completed", 00:17:10.922 "digest": "sha512", 00:17:10.922 "dhgroup": "null" 00:17:10.922 } 00:17:10.922 } 00:17:10.922 ]' 00:17:10.922 12:31:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.922 12:31:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.922 12:31:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.922 12:31:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:10.922 12:31:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.922 12:31:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.922 12:31:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.922 12:31:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.489 12:31:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:17:11.489 12:31:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:17:12.863 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.863 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:17:12.863 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.863 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:12.863 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.863 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.863 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:12.863 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:13.121 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:13.122 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.122 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.122 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:13.122 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:13.122 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.122 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.122 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.122 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.122 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.122 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.122 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.122 12:31:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.380 00:17:13.637 12:31:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.637 12:31:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.637 12:31:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.894 12:31:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.894 12:31:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.894 12:31:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.894 12:31:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.894 12:31:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.894 12:31:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.894 { 00:17:13.894 "cntlid": 99, 00:17:13.894 "qid": 0, 00:17:13.894 "state": "enabled", 00:17:13.894 "thread": "nvmf_tgt_poll_group_000", 00:17:13.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:17:13.894 "listen_address": { 00:17:13.894 "trtype": "RDMA", 00:17:13.894 "adrfam": "IPv4", 00:17:13.894 "traddr": "192.168.100.8", 00:17:13.894 "trsvcid": "4420" 00:17:13.894 }, 00:17:13.894 "peer_address": { 00:17:13.894 "trtype": "RDMA", 00:17:13.894 "adrfam": "IPv4", 00:17:13.894 "traddr": "192.168.100.8", 00:17:13.894 "trsvcid": "37211" 00:17:13.894 }, 00:17:13.894 "auth": { 00:17:13.894 "state": "completed", 00:17:13.894 "digest": "sha512", 00:17:13.894 "dhgroup": "null" 00:17:13.894 } 00:17:13.894 } 00:17:13.894 ]' 00:17:13.894 12:31:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.894 12:31:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.894 12:31:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.894 12:31:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:13.894 12:31:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.894 12:31:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.894 12:31:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.894 12:31:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.460 12:31:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:17:14.460 12:31:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:17:15.891 12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.891 12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:17:15.891 
12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.891 12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.891 12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.891 12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.891 12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:15.891 12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:16.150 12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:16.150 12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.150 12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:16.150 12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:16.150 12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:16.150 12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.150 12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.150 12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.150 12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.150 12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.150 12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.150 12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.150 12:31:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.409 00:17:16.409 12:31:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.409 12:31:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.409 12:31:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.986 
12:31:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.986 12:31:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.986 12:31:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.986 12:31:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.986 12:31:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.986 12:31:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.986 { 00:17:16.986 "cntlid": 101, 00:17:16.986 "qid": 0, 00:17:16.986 "state": "enabled", 00:17:16.986 "thread": "nvmf_tgt_poll_group_000", 00:17:16.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:17:16.986 "listen_address": { 00:17:16.986 "trtype": "RDMA", 00:17:16.986 "adrfam": "IPv4", 00:17:16.986 "traddr": "192.168.100.8", 00:17:16.986 "trsvcid": "4420" 00:17:16.986 }, 00:17:16.986 "peer_address": { 00:17:16.986 "trtype": "RDMA", 00:17:16.986 "adrfam": "IPv4", 00:17:16.986 "traddr": "192.168.100.8", 00:17:16.986 "trsvcid": "39478" 00:17:16.986 }, 00:17:16.986 "auth": { 00:17:16.986 "state": "completed", 00:17:16.986 "digest": "sha512", 00:17:16.986 "dhgroup": "null" 00:17:16.986 } 00:17:16.986 } 00:17:16.986 ]' 00:17:16.986 12:31:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.986 12:31:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.986 12:31:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.986 12:31:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:16.986 12:31:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.986 12:31:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.986 12:31:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.986 12:31:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.553 12:31:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:17:17.553 12:31:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:17:18.932 12:31:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.932 12:31:24 
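Verification is identical in every pass: ask the target for the subsystem's queue pairs and check the auth block of the first one, which is exactly what the three jq probes above run against the printed JSON. As a standalone check, reusing $rpc and the subsystem NQN from the earlier sketch, it reduces to:

    # Fetch the single active qpair and assert the negotiated parameters.
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]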
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:17:18.932 12:31:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.932 12:31:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.932 12:31:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.932 12:31:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.932 12:31:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:18.932 12:31:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:19.191 12:31:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:19.191 12:31:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.191 12:31:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:19.191 12:31:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:19.191 12:31:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:19.191 12:31:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.191 12:31:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key3 00:17:19.191 12:31:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.191 12:31:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.191 12:31:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.191 12:31:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:19.191 12:31:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.191 12:31:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.450 00:17:19.450 12:31:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.450 12:31:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.450 12:31:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.018 12:31:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.018 12:31:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.018 12:31:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.018 12:31:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.018 12:31:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.018 12:31:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.018 { 00:17:20.018 "cntlid": 103, 00:17:20.018 "qid": 0, 00:17:20.018 "state": "enabled", 00:17:20.018 "thread": "nvmf_tgt_poll_group_000", 00:17:20.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:17:20.018 "listen_address": { 00:17:20.018 "trtype": "RDMA", 00:17:20.018 "adrfam": "IPv4", 00:17:20.018 "traddr": "192.168.100.8", 00:17:20.018 "trsvcid": "4420" 00:17:20.018 }, 00:17:20.018 "peer_address": { 00:17:20.018 "trtype": "RDMA", 00:17:20.018 "adrfam": "IPv4", 00:17:20.018 "traddr": "192.168.100.8", 00:17:20.018 "trsvcid": "59052" 00:17:20.018 }, 00:17:20.018 "auth": { 00:17:20.018 "state": "completed", 00:17:20.018 "digest": "sha512", 00:17:20.018 "dhgroup": "null" 00:17:20.018 } 00:17:20.018 } 00:17:20.018 ]' 00:17:20.018 12:31:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.018 12:31:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.018 12:31:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.018 12:31:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:20.018 12:31:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.018 12:31:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.018 12:31:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.018 12:31:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.608 12:31:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:17:20.608 12:31:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:17:21.553 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.811 12:31:27 
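A note on the secret strings: the --dhchap-secret values follow the NVMe-oF DH-HMAC-CHAP representation DHHC-1:<hh>:<base64>:, where the two-digit <hh> field names the hash used to transform the configured secret into the retained key (00 for no transform, 01 for SHA-256, 02 for SHA-384, 03 for SHA-512). That matches the pattern in this run: key1 carries a SHA-256-transformed secret, key2 SHA-384, key3 SHA-512. Notice also that the key3 passes, like the one just completed, supply no --dhchap-ctrl-secret, so only the controller authenticates the host there and the bidirectional challenge is skipped. Pulling the transform id out of a key is a two-step parameter expansion (the sample key is key1's value from this run):

    key='DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ:'
    hh=${key#DHHC-1:}   # strip the version prefix
    hh=${hh%%:*}        # keep the two-digit transform id
    case $hh in
        00) echo 'secret stored untransformed';;
        01) echo 'secret transformed with SHA-256';;
        02) echo 'secret transformed with SHA-384';;
        03) echo 'secret transformed with SHA-512';;
    esac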
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:17:21.811 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.811 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.811 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.811 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.811 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.811 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:21.811 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:22.069 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:22.069 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.069 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:22.069 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:22.069 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:22.069 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.069 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.069 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.069 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.069 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.069 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.069 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.069 12:31:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.633 00:17:22.633 12:31:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:17:22.633 12:31:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.634 12:31:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.891 12:31:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.891 12:31:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.891 12:31:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.891 12:31:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.891 12:31:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.891 12:31:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.891 { 00:17:22.891 "cntlid": 105, 00:17:22.891 "qid": 0, 00:17:22.891 "state": "enabled", 00:17:22.891 "thread": "nvmf_tgt_poll_group_000", 00:17:22.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:17:22.891 "listen_address": { 00:17:22.891 "trtype": "RDMA", 00:17:22.891 "adrfam": "IPv4", 00:17:22.891 "traddr": "192.168.100.8", 00:17:22.891 "trsvcid": "4420" 00:17:22.891 }, 00:17:22.891 "peer_address": { 00:17:22.891 "trtype": "RDMA", 00:17:22.891 "adrfam": "IPv4", 00:17:22.891 "traddr": "192.168.100.8", 00:17:22.891 "trsvcid": "46380" 00:17:22.891 }, 00:17:22.891 "auth": { 00:17:22.891 "state": "completed", 00:17:22.891 "digest": "sha512", 00:17:22.891 "dhgroup": "ffdhe2048" 00:17:22.891 } 00:17:22.891 } 00:17:22.891 ]' 00:17:22.891 12:31:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.891 12:31:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.891 12:31:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.891 12:31:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:22.891 12:31:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.148 12:31:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.148 12:31:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.148 12:31:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.405 12:31:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:17:23.405 12:31:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 
--dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:17:24.775 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.775 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:17:24.775 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.775 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.775 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.775 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.775 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:24.775 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:25.033 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:25.033 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.033 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:25.033 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:25.033 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:25.033 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.033 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.033 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.033 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.033 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.033 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.033 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.033 12:31:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.599 00:17:25.599 12:31:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.599 12:31:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.599 12:31:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.857 12:31:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.857 12:31:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.857 12:31:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.857 12:31:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.857 12:31:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.857 12:31:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.857 { 00:17:25.857 "cntlid": 107, 00:17:25.857 "qid": 0, 00:17:25.857 "state": "enabled", 00:17:25.857 "thread": "nvmf_tgt_poll_group_000", 00:17:25.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:17:25.857 "listen_address": { 00:17:25.857 "trtype": "RDMA", 00:17:25.857 "adrfam": "IPv4", 00:17:25.857 "traddr": "192.168.100.8", 00:17:25.857 "trsvcid": "4420" 00:17:25.857 }, 00:17:25.857 "peer_address": { 00:17:25.857 "trtype": "RDMA", 00:17:25.857 "adrfam": "IPv4", 00:17:25.857 "traddr": "192.168.100.8", 00:17:25.857 "trsvcid": "52552" 00:17:25.857 }, 00:17:25.857 "auth": { 00:17:25.857 "state": "completed", 00:17:25.857 "digest": "sha512", 00:17:25.857 "dhgroup": "ffdhe2048" 00:17:25.857 } 00:17:25.857 } 00:17:25.857 ]' 00:17:25.857 12:31:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.857 12:31:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.857 12:31:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.114 12:31:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:26.115 12:31:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.115 12:31:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.115 12:31:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.115 12:31:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.373 12:31:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 
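The nvme_connect helper invoked above is the kernel-initiator half of each pass: the same subsystem is connected once more through nvme-cli so the handshake is exercised against the Linux host stack as well. Stripped of the test plumbing, the round trip is the following (flags copied from the trace; the DHHC-1 strings are abbreviated here, the full throwaway test values appear in the log):

    # Connect with in-band DH-HMAC-CHAP, then tear the session down.
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae \
        --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 \
        --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0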
00:17:26.373 12:31:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:17:27.748 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.748 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:17:27.748 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.748 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.748 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.748 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.748 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:27.748 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:28.315 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:28.315 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.315 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:28.315 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:28.315 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:28.315 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.315 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.315 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.315 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.315 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.315 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.315 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.315 12:31:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.573 00:17:28.573 12:31:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.573 12:31:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.573 12:31:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.832 12:31:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.832 12:31:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.832 12:31:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.832 12:31:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.832 12:31:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.832 12:31:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.832 { 00:17:28.832 "cntlid": 109, 00:17:28.832 "qid": 0, 00:17:28.832 "state": "enabled", 00:17:28.832 "thread": "nvmf_tgt_poll_group_000", 00:17:28.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:17:28.832 "listen_address": { 00:17:28.832 "trtype": "RDMA", 00:17:28.832 "adrfam": "IPv4", 00:17:28.832 "traddr": "192.168.100.8", 00:17:28.832 "trsvcid": "4420" 00:17:28.832 }, 00:17:28.832 "peer_address": { 00:17:28.832 "trtype": "RDMA", 00:17:28.832 "adrfam": "IPv4", 00:17:28.832 "traddr": "192.168.100.8", 00:17:28.832 "trsvcid": "40380" 00:17:28.832 }, 00:17:28.832 "auth": { 00:17:28.832 "state": "completed", 00:17:28.832 "digest": "sha512", 00:17:28.832 "dhgroup": "ffdhe2048" 00:17:28.832 } 00:17:28.832 } 00:17:28.832 ]' 00:17:28.832 12:31:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.091 12:31:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.091 12:31:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.091 12:31:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:29.091 12:31:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.091 12:31:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.091 12:31:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.091 12:31:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.349 12:31:35 
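Two SPDK applications are in play throughout this trace: the nvmf target, driven through rpc_cmd, and a second SPDK app acting as the NVMe host, driven through the hostrpc wrapper behind every "-s /var/tmp/host.sock" invocation above. A plausible definition of that wrapper, inferred from how target/auth.sh@31 expands in the trace (a hypothetical reconstruction; only the socket path and rpc.py location are taken from the log):

    hostrpc() {
        # Same rpc.py, pointed at the host app's RPC server
        # instead of the target's.
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }

    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'    # prints: nvme0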
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:17:29.349 12:31:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:17:30.724 12:31:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.724 12:31:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:17:30.724 12:31:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.724 12:31:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.724 12:31:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.724 12:31:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.724 12:31:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:30.724 12:31:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:31.291 12:31:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:31.291 12:31:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.291 12:31:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:31.291 12:31:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:31.291 12:31:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:31.291 12:31:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.291 12:31:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key3 00:17:31.291 12:31:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.291 12:31:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.291 12:31:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.291 12:31:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:31.291 12:31:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.291 12:31:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.550 00:17:31.550 12:31:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.550 12:31:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.550 12:31:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.809 12:31:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.809 12:31:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.809 12:31:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.809 12:31:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.809 12:31:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.809 12:31:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.809 { 00:17:31.809 "cntlid": 111, 00:17:31.809 "qid": 0, 00:17:31.809 "state": "enabled", 00:17:31.809 "thread": "nvmf_tgt_poll_group_000", 00:17:31.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:17:31.809 "listen_address": { 00:17:31.809 "trtype": "RDMA", 00:17:31.809 "adrfam": "IPv4", 00:17:31.809 "traddr": "192.168.100.8", 00:17:31.809 "trsvcid": "4420" 00:17:31.809 }, 00:17:31.809 "peer_address": { 00:17:31.809 "trtype": "RDMA", 00:17:31.809 "adrfam": "IPv4", 00:17:31.809 "traddr": "192.168.100.8", 00:17:31.809 "trsvcid": "59048" 00:17:31.809 }, 00:17:31.809 "auth": { 00:17:31.809 "state": "completed", 00:17:31.809 "digest": "sha512", 00:17:31.809 "dhgroup": "ffdhe2048" 00:17:31.809 } 00:17:31.809 } 00:17:31.809 ]' 00:17:32.067 12:31:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.067 12:31:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.067 12:31:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.067 12:31:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:32.067 12:31:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.067 12:31:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.067 12:31:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.067 12:31:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.326 12:31:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:17:32.326 12:31:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:17:33.702 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.960 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:17:33.960 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.960 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.960 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.960 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.960 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.960 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:33.960 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:34.218 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:34.218 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.218 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:34.218 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:34.218 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:34.218 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.218 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.218 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.219 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.219 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:34.219 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.219 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.219 12:31:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.786 00:17:34.786 12:31:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.786 12:31:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.786 12:31:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.044 12:31:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.044 12:31:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.044 12:31:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.044 12:31:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.044 12:31:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.044 12:31:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.044 { 00:17:35.044 "cntlid": 113, 00:17:35.044 "qid": 0, 00:17:35.044 "state": "enabled", 00:17:35.044 "thread": "nvmf_tgt_poll_group_000", 00:17:35.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:17:35.044 "listen_address": { 00:17:35.044 "trtype": "RDMA", 00:17:35.044 "adrfam": "IPv4", 00:17:35.044 "traddr": "192.168.100.8", 00:17:35.044 "trsvcid": "4420" 00:17:35.044 }, 00:17:35.044 "peer_address": { 00:17:35.044 "trtype": "RDMA", 00:17:35.044 "adrfam": "IPv4", 00:17:35.044 "traddr": "192.168.100.8", 00:17:35.044 "trsvcid": "35108" 00:17:35.044 }, 00:17:35.044 "auth": { 00:17:35.044 "state": "completed", 00:17:35.044 "digest": "sha512", 00:17:35.044 "dhgroup": "ffdhe3072" 00:17:35.044 } 00:17:35.044 } 00:17:35.044 ]' 00:17:35.044 12:31:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.044 12:31:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.044 12:31:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.044 12:31:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:35.044 12:31:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.044 12:31:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.044 12:31:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.044 12:31:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.611 12:31:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:17:35.611 12:31:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:17:36.986 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.986 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:17:36.986 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.986 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.986 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.986 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.986 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:36.986 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:37.244 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:37.244 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.244 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:37.245 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:37.245 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:37.245 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.245 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key ckey1 
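One bash idiom in the trace deserves a note: ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) at target/auth.sh@68 builds an argument array that expands to nothing whenever the controller secret for that key id is unset, which is why the key3 passes call nvmf_subsystem_add_host with no --dhchap-ctrlr-key at all, as seen just above. A self-contained illustration of the :+ expansion, recast with a loop variable in place of the function's $3 and with made-up placeholder secrets:

    ckeys=(ckey-secret-0 ckey-secret-1 ckey-secret-2 '')  # key3: no ctrlr secret
    for keyid in 0 1 2 3; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid: ${ckey[@]:-<bidirectional auth skipped>}"
    done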
00:17:37.245 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.245 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.245 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.245 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.245 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.245 12:31:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.812 00:17:37.812 12:31:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.812 12:31:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.812 12:31:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.071 12:31:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.071 12:31:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.071 12:31:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.071 12:31:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.071 12:31:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.071 12:31:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.071 { 00:17:38.071 "cntlid": 115, 00:17:38.071 "qid": 0, 00:17:38.071 "state": "enabled", 00:17:38.071 "thread": "nvmf_tgt_poll_group_000", 00:17:38.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:17:38.071 "listen_address": { 00:17:38.071 "trtype": "RDMA", 00:17:38.071 "adrfam": "IPv4", 00:17:38.071 "traddr": "192.168.100.8", 00:17:38.071 "trsvcid": "4420" 00:17:38.071 }, 00:17:38.071 "peer_address": { 00:17:38.071 "trtype": "RDMA", 00:17:38.071 "adrfam": "IPv4", 00:17:38.071 "traddr": "192.168.100.8", 00:17:38.071 "trsvcid": "43142" 00:17:38.071 }, 00:17:38.071 "auth": { 00:17:38.071 "state": "completed", 00:17:38.071 "digest": "sha512", 00:17:38.071 "dhgroup": "ffdhe3072" 00:17:38.071 } 00:17:38.071 } 00:17:38.071 ]' 00:17:38.071 12:31:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.071 12:31:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.071 12:31:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
00:17:38.071 12:31:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:38.071 12:31:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.071 12:31:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.071 12:31:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.071 12:31:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.637 12:31:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:17:38.637 12:31:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:17:40.013 12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.013 12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:17:40.013 12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.013 12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.013 12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.013 12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.013 12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.013 12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.272 12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:40.272 12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.272 12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.272 12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:40.272 12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:40.272 12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.272 
12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.272 12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.272 12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.272 12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.272 12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.272 12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.272 12:31:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.840 00:17:40.840 12:31:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.840 12:31:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.840 12:31:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.099 12:31:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.099 12:31:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.099 12:31:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.099 12:31:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.099 12:31:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.099 12:31:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.099 { 00:17:41.099 "cntlid": 117, 00:17:41.099 "qid": 0, 00:17:41.099 "state": "enabled", 00:17:41.099 "thread": "nvmf_tgt_poll_group_000", 00:17:41.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:17:41.099 "listen_address": { 00:17:41.099 "trtype": "RDMA", 00:17:41.099 "adrfam": "IPv4", 00:17:41.099 "traddr": "192.168.100.8", 00:17:41.099 "trsvcid": "4420" 00:17:41.099 }, 00:17:41.099 "peer_address": { 00:17:41.099 "trtype": "RDMA", 00:17:41.099 "adrfam": "IPv4", 00:17:41.099 "traddr": "192.168.100.8", 00:17:41.099 "trsvcid": "50550" 00:17:41.099 }, 00:17:41.099 "auth": { 00:17:41.099 "state": "completed", 00:17:41.099 "digest": "sha512", 00:17:41.099 "dhgroup": "ffdhe3072" 00:17:41.099 } 00:17:41.099 } 00:17:41.099 ]' 00:17:41.099 12:31:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:17:41.099 12:31:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.099 12:31:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.099 12:31:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:41.099 12:31:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.099 12:31:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.099 12:31:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.099 12:31:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.783 12:31:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:17:41.783 12:31:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:17:42.719 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.978 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:17:42.978 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.978 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.978 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.978 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.978 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:42.978 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:43.236 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:43.236 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.236 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.236 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
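
The xtrace records around this point show connect_authenticate entering its next iteration (sha512 / ffdhe3072 / key3). Condensed into a standalone sketch, the setup half of each iteration looks like the following; the rpc.py path, socket, addresses and NQNs are taken from this run, while the hostrpc/rpc_cmd wrapper names and the keyN/ckeyN keyring names follow the test's own conventions and are assumed to have been registered earlier in the log:

    # Host-side RPC endpoint used by every hostrpc call in this trace.
    rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    subnqn="nqn.2024-03.io.spdk:cnode0"
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae"

    # 1. Pin the host to the digest/dhgroup pair under test.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # 2. Register the host on the target with the key under test. The ctrlr key
    #    is optional; the trace's ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    #    expands to nothing for key3, which has no companion ckey3 in this run.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

    # 3. Attach a controller from the host side with the matching key name.
    $rpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3
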
00:17:43.236 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:43.236 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.236 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key3 00:17:43.236 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.236 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.236 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.236 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:43.236 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.236 12:31:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.803 00:17:43.803 12:31:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.803 12:31:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.803 12:31:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.062 12:31:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.062 12:31:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.062 12:31:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.062 12:31:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.062 12:31:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.062 12:31:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.062 { 00:17:44.062 "cntlid": 119, 00:17:44.062 "qid": 0, 00:17:44.062 "state": "enabled", 00:17:44.062 "thread": "nvmf_tgt_poll_group_000", 00:17:44.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:17:44.062 "listen_address": { 00:17:44.062 "trtype": "RDMA", 00:17:44.062 "adrfam": "IPv4", 00:17:44.062 "traddr": "192.168.100.8", 00:17:44.062 "trsvcid": "4420" 00:17:44.062 }, 00:17:44.062 "peer_address": { 00:17:44.062 "trtype": "RDMA", 00:17:44.062 "adrfam": "IPv4", 00:17:44.062 "traddr": "192.168.100.8", 00:17:44.062 "trsvcid": "43939" 00:17:44.062 }, 00:17:44.062 "auth": { 00:17:44.062 "state": "completed", 00:17:44.062 "digest": "sha512", 00:17:44.062 "dhgroup": "ffdhe3072" 
00:17:44.062 } 00:17:44.062 } 00:17:44.062 ]' 00:17:44.062 12:31:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.062 12:31:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.062 12:31:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.320 12:31:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:44.320 12:31:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.320 12:31:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.320 12:31:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.320 12:31:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.578 12:31:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:17:44.578 12:31:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:17:45.953 12:31:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.953 12:31:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:17:45.953 12:31:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.953 12:31:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.953 12:31:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.953 12:31:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.953 12:31:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.953 12:31:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:45.953 12:31:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:46.521 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:46.521 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.521 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha512 00:17:46.521 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:46.521 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:46.521 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.521 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.521 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.522 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.522 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.522 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.522 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.522 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.782 00:17:46.782 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.782 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.782 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.348 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.348 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.348 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.348 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.348 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.349 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.349 { 00:17:47.349 "cntlid": 121, 00:17:47.349 "qid": 0, 00:17:47.349 "state": "enabled", 00:17:47.349 "thread": "nvmf_tgt_poll_group_000", 00:17:47.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:17:47.349 "listen_address": { 00:17:47.349 "trtype": "RDMA", 00:17:47.349 "adrfam": "IPv4", 00:17:47.349 "traddr": "192.168.100.8", 00:17:47.349 "trsvcid": "4420" 00:17:47.349 }, 00:17:47.349 "peer_address": { 00:17:47.349 "trtype": "RDMA", 
00:17:47.349 "adrfam": "IPv4", 00:17:47.349 "traddr": "192.168.100.8", 00:17:47.349 "trsvcid": "51762" 00:17:47.349 }, 00:17:47.349 "auth": { 00:17:47.349 "state": "completed", 00:17:47.349 "digest": "sha512", 00:17:47.349 "dhgroup": "ffdhe4096" 00:17:47.349 } 00:17:47.349 } 00:17:47.349 ]' 00:17:47.349 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.349 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.349 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.349 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:47.349 12:31:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.349 12:31:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.349 12:31:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.349 12:31:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.915 12:31:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:17:47.915 12:31:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:17:49.289 12:31:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.289 12:31:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:17:49.289 12:31:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.289 12:31:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.289 12:31:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.289 12:31:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.289 12:31:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.289 12:31:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:17:49.547 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:49.547 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.547 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.547 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:49.547 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:49.547 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.547 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.547 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.547 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.547 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.547 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.547 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.547 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.114 00:17:50.114 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.114 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.114 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.372 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.372 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.372 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.372 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.372 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.372 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.372 { 00:17:50.372 "cntlid": 123, 00:17:50.372 "qid": 0, 00:17:50.372 "state": "enabled", 00:17:50.372 "thread": "nvmf_tgt_poll_group_000", 
00:17:50.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:17:50.372 "listen_address": { 00:17:50.372 "trtype": "RDMA", 00:17:50.372 "adrfam": "IPv4", 00:17:50.372 "traddr": "192.168.100.8", 00:17:50.372 "trsvcid": "4420" 00:17:50.372 }, 00:17:50.372 "peer_address": { 00:17:50.372 "trtype": "RDMA", 00:17:50.372 "adrfam": "IPv4", 00:17:50.372 "traddr": "192.168.100.8", 00:17:50.372 "trsvcid": "44442" 00:17:50.372 }, 00:17:50.372 "auth": { 00:17:50.372 "state": "completed", 00:17:50.372 "digest": "sha512", 00:17:50.372 "dhgroup": "ffdhe4096" 00:17:50.372 } 00:17:50.372 } 00:17:50.372 ]' 00:17:50.372 12:31:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.372 12:31:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.372 12:31:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.372 12:31:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:50.372 12:31:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.372 12:31:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.372 12:31:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.372 12:31:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.938 12:31:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:17:50.938 12:31:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:17:52.312 12:31:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.312 12:31:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:17:52.312 12:31:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.312 12:31:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.312 12:31:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.312 12:31:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.312 12:31:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:17:52.312 12:31:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:52.570 12:31:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:52.570 12:31:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.570 12:31:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.570 12:31:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:52.570 12:31:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:52.570 12:31:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.570 12:31:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.570 12:31:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.570 12:31:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.570 12:31:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.570 12:31:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.570 12:31:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.570 12:31:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.137 00:17:53.137 12:31:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.137 12:31:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.137 12:31:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.396 12:31:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.396 12:31:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.396 12:31:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.396 12:31:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.396 12:31:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
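
With the controller attached, the checks at target/auth.sh@73-77 confirm both sides agree on what was negotiated: the host must report the controller by name, and the target's qpair listing must carry the digest, DH group and auth state expected for this iteration. As a standalone sketch (expected values are the current iteration's, sha512 over ffdhe4096):

    # Host side: the attached controller shows up under its bdev name.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Target side: the qpair's auth block records the negotiated parameters.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Bdev-level check done; detach before the kernel-initiator pass.
    hostrpc bdev_nvme_detach_controller nvme0
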
00:17:53.396 12:31:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.396 { 00:17:53.396 "cntlid": 125, 00:17:53.396 "qid": 0, 00:17:53.396 "state": "enabled", 00:17:53.396 "thread": "nvmf_tgt_poll_group_000", 00:17:53.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:17:53.396 "listen_address": { 00:17:53.396 "trtype": "RDMA", 00:17:53.396 "adrfam": "IPv4", 00:17:53.396 "traddr": "192.168.100.8", 00:17:53.396 "trsvcid": "4420" 00:17:53.396 }, 00:17:53.396 "peer_address": { 00:17:53.396 "trtype": "RDMA", 00:17:53.396 "adrfam": "IPv4", 00:17:53.396 "traddr": "192.168.100.8", 00:17:53.396 "trsvcid": "36142" 00:17:53.396 }, 00:17:53.396 "auth": { 00:17:53.396 "state": "completed", 00:17:53.396 "digest": "sha512", 00:17:53.396 "dhgroup": "ffdhe4096" 00:17:53.396 } 00:17:53.396 } 00:17:53.396 ]' 00:17:53.396 12:31:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.396 12:31:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.396 12:31:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.653 12:31:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:53.653 12:31:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.653 12:31:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.653 12:31:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.653 12:31:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.911 12:31:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:17:53.911 12:31:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:17:55.285 12:32:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.285 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:17:55.285 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.285 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.285 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.285 12:32:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.285 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:55.285 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:55.851 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:55.851 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.851 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.851 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:55.851 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:55.851 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.851 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key3 00:17:55.851 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.851 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.851 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.851 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:55.851 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.851 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.109 00:17:56.109 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.109 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.109 12:32:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.675 12:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.675 12:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.675 12:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.675 12:32:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.675 12:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.675 12:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.675 { 00:17:56.675 "cntlid": 127, 00:17:56.675 "qid": 0, 00:17:56.675 "state": "enabled", 00:17:56.675 "thread": "nvmf_tgt_poll_group_000", 00:17:56.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:17:56.676 "listen_address": { 00:17:56.676 "trtype": "RDMA", 00:17:56.676 "adrfam": "IPv4", 00:17:56.676 "traddr": "192.168.100.8", 00:17:56.676 "trsvcid": "4420" 00:17:56.676 }, 00:17:56.676 "peer_address": { 00:17:56.676 "trtype": "RDMA", 00:17:56.676 "adrfam": "IPv4", 00:17:56.676 "traddr": "192.168.100.8", 00:17:56.676 "trsvcid": "51938" 00:17:56.676 }, 00:17:56.676 "auth": { 00:17:56.676 "state": "completed", 00:17:56.676 "digest": "sha512", 00:17:56.676 "dhgroup": "ffdhe4096" 00:17:56.676 } 00:17:56.676 } 00:17:56.676 ]' 00:17:56.676 12:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.676 12:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.676 12:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.676 12:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:56.676 12:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.676 12:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.676 12:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.676 12:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.243 12:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:17:57.243 12:32:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:17:58.618 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.618 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:17:58.618 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.618 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.618 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.618 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.618 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.618 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:58.618 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:58.877 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:58.877 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.877 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.877 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:58.877 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:58.877 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.877 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.877 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.877 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.877 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.877 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.877 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.877 12:32:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.443 00:17:59.443 12:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.443 12:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.443 12:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.009 12:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.009 12:32:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.009 12:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.009 12:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.009 12:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.009 12:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.009 { 00:18:00.009 "cntlid": 129, 00:18:00.009 "qid": 0, 00:18:00.009 "state": "enabled", 00:18:00.009 "thread": "nvmf_tgt_poll_group_000", 00:18:00.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:18:00.009 "listen_address": { 00:18:00.009 "trtype": "RDMA", 00:18:00.009 "adrfam": "IPv4", 00:18:00.009 "traddr": "192.168.100.8", 00:18:00.009 "trsvcid": "4420" 00:18:00.009 }, 00:18:00.009 "peer_address": { 00:18:00.009 "trtype": "RDMA", 00:18:00.009 "adrfam": "IPv4", 00:18:00.009 "traddr": "192.168.100.8", 00:18:00.009 "trsvcid": "38186" 00:18:00.009 }, 00:18:00.009 "auth": { 00:18:00.009 "state": "completed", 00:18:00.009 "digest": "sha512", 00:18:00.009 "dhgroup": "ffdhe6144" 00:18:00.009 } 00:18:00.009 } 00:18:00.009 ]' 00:18:00.009 12:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.009 12:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.009 12:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.009 12:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:00.009 12:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.009 12:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.009 12:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.009 12:32:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.268 12:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:18:00.268 12:32:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:18:01.643 12:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.901 12:32:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:18:01.901 12:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.901 12:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.901 12:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.901 12:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.901 12:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:01.901 12:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:02.160 12:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:02.160 12:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.160 12:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.160 12:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:02.160 12:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:02.160 12:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.160 12:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.160 12:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.160 12:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.160 12:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.160 12:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.160 12:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.160 12:32:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.727 00:18:02.727 12:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.727 12:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq 
-r '.[].name' 00:18:02.727 12:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.293 12:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.293 12:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.293 12:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.293 12:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.293 12:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.293 12:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.293 { 00:18:03.293 "cntlid": 131, 00:18:03.293 "qid": 0, 00:18:03.293 "state": "enabled", 00:18:03.293 "thread": "nvmf_tgt_poll_group_000", 00:18:03.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:18:03.293 "listen_address": { 00:18:03.293 "trtype": "RDMA", 00:18:03.293 "adrfam": "IPv4", 00:18:03.293 "traddr": "192.168.100.8", 00:18:03.293 "trsvcid": "4420" 00:18:03.293 }, 00:18:03.293 "peer_address": { 00:18:03.293 "trtype": "RDMA", 00:18:03.293 "adrfam": "IPv4", 00:18:03.293 "traddr": "192.168.100.8", 00:18:03.293 "trsvcid": "49300" 00:18:03.293 }, 00:18:03.293 "auth": { 00:18:03.293 "state": "completed", 00:18:03.293 "digest": "sha512", 00:18:03.293 "dhgroup": "ffdhe6144" 00:18:03.293 } 00:18:03.293 } 00:18:03.293 ]' 00:18:03.293 12:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.293 12:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.293 12:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.293 12:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:03.293 12:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.293 12:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.293 12:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.293 12:32:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.863 12:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:18:03.863 12:32:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret 
DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:18:05.239 12:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.239 12:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:18:05.239 12:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.239 12:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.239 12:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.239 12:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.239 12:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.239 12:32:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.498 12:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:05.498 12:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.498 12:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.498 12:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:05.498 12:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:05.498 12:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.498 12:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.498 12:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.498 12:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.498 12:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.498 12:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.498 12:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.498 12:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.065 00:18:06.065 12:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.065 12:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.065 12:32:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.631 12:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.631 12:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.631 12:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.631 12:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.631 12:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.631 12:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.631 { 00:18:06.631 "cntlid": 133, 00:18:06.631 "qid": 0, 00:18:06.631 "state": "enabled", 00:18:06.631 "thread": "nvmf_tgt_poll_group_000", 00:18:06.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:18:06.631 "listen_address": { 00:18:06.631 "trtype": "RDMA", 00:18:06.631 "adrfam": "IPv4", 00:18:06.631 "traddr": "192.168.100.8", 00:18:06.631 "trsvcid": "4420" 00:18:06.631 }, 00:18:06.631 "peer_address": { 00:18:06.631 "trtype": "RDMA", 00:18:06.631 "adrfam": "IPv4", 00:18:06.631 "traddr": "192.168.100.8", 00:18:06.631 "trsvcid": "32968" 00:18:06.631 }, 00:18:06.631 "auth": { 00:18:06.631 "state": "completed", 00:18:06.631 "digest": "sha512", 00:18:06.631 "dhgroup": "ffdhe6144" 00:18:06.631 } 00:18:06.631 } 00:18:06.631 ]' 00:18:06.631 12:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.631 12:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.631 12:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.631 12:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.631 12:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.631 12:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.631 12:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.631 12:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.198 12:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:18:07.198 12:32:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:18:08.640 12:32:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.640 12:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:18:08.640 12:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.640 12:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.640 12:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.640 12:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.640 12:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:08.640 12:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:08.898 12:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:08.898 12:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.898 12:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.898 12:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:08.898 12:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:08.898 12:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.898 12:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key3 00:18:08.898 12:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.898 12:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.898 12:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.898 12:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:08.898 12:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.898 12:32:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.465 00:18:09.465 12:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.465 12:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.465 12:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.723 12:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.723 12:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.723 12:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.723 12:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.723 12:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.723 12:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.723 { 00:18:09.723 "cntlid": 135, 00:18:09.723 "qid": 0, 00:18:09.723 "state": "enabled", 00:18:09.723 "thread": "nvmf_tgt_poll_group_000", 00:18:09.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:18:09.723 "listen_address": { 00:18:09.723 "trtype": "RDMA", 00:18:09.723 "adrfam": "IPv4", 00:18:09.723 "traddr": "192.168.100.8", 00:18:09.723 "trsvcid": "4420" 00:18:09.723 }, 00:18:09.723 "peer_address": { 00:18:09.723 "trtype": "RDMA", 00:18:09.723 "adrfam": "IPv4", 00:18:09.723 "traddr": "192.168.100.8", 00:18:09.723 "trsvcid": "37194" 00:18:09.723 }, 00:18:09.723 "auth": { 00:18:09.723 "state": "completed", 00:18:09.723 "digest": "sha512", 00:18:09.723 "dhgroup": "ffdhe6144" 00:18:09.723 } 00:18:09.723 } 00:18:09.723 ]' 00:18:09.723 12:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.982 12:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.982 12:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.982 12:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:09.982 12:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.982 12:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.982 12:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.982 12:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.240 12:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 
00:18:10.240 12:32:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:18:11.614 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.872 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:18:11.872 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.872 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.872 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.872 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.872 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.872 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:11.872 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:12.130 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:12.130 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.130 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.130 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:12.130 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:12.130 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.130 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.130 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.130 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.130 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.130 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.130 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.130 12:32:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.065 00:18:13.065 12:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.065 12:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.065 12:32:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.630 12:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.630 12:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.630 12:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.630 12:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.630 12:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.630 12:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.630 { 00:18:13.630 "cntlid": 137, 00:18:13.630 "qid": 0, 00:18:13.630 "state": "enabled", 00:18:13.630 "thread": "nvmf_tgt_poll_group_000", 00:18:13.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:18:13.630 "listen_address": { 00:18:13.630 "trtype": "RDMA", 00:18:13.630 "adrfam": "IPv4", 00:18:13.630 "traddr": "192.168.100.8", 00:18:13.630 "trsvcid": "4420" 00:18:13.630 }, 00:18:13.630 "peer_address": { 00:18:13.630 "trtype": "RDMA", 00:18:13.630 "adrfam": "IPv4", 00:18:13.630 "traddr": "192.168.100.8", 00:18:13.630 "trsvcid": "57234" 00:18:13.630 }, 00:18:13.630 "auth": { 00:18:13.630 "state": "completed", 00:18:13.630 "digest": "sha512", 00:18:13.630 "dhgroup": "ffdhe8192" 00:18:13.630 } 00:18:13.630 } 00:18:13.631 ]' 00:18:13.631 12:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.631 12:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.631 12:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.631 12:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:13.631 12:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.631 12:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.631 12:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.631 12:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.197 12:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:18:14.197 12:32:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:18:15.570 12:32:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.570 12:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:18:15.570 12:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.570 12:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.570 12:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.570 12:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.570 12:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:15.570 12:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:15.829 12:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:15.829 12:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.829 12:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.829 12:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:15.829 12:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:15.829 12:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.829 12:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.829 12:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.829 12:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.829 12:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:18:15.829 12:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.829 12:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.829 12:32:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.766 00:18:16.766 12:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.766 12:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.766 12:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.333 12:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.333 12:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.333 12:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.333 12:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.333 12:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.333 12:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.333 { 00:18:17.333 "cntlid": 139, 00:18:17.333 "qid": 0, 00:18:17.333 "state": "enabled", 00:18:17.333 "thread": "nvmf_tgt_poll_group_000", 00:18:17.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:18:17.333 "listen_address": { 00:18:17.333 "trtype": "RDMA", 00:18:17.333 "adrfam": "IPv4", 00:18:17.333 "traddr": "192.168.100.8", 00:18:17.333 "trsvcid": "4420" 00:18:17.333 }, 00:18:17.333 "peer_address": { 00:18:17.333 "trtype": "RDMA", 00:18:17.333 "adrfam": "IPv4", 00:18:17.333 "traddr": "192.168.100.8", 00:18:17.333 "trsvcid": "38865" 00:18:17.333 }, 00:18:17.333 "auth": { 00:18:17.333 "state": "completed", 00:18:17.333 "digest": "sha512", 00:18:17.333 "dhgroup": "ffdhe8192" 00:18:17.333 } 00:18:17.333 } 00:18:17.333 ]' 00:18:17.334 12:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.334 12:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.334 12:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.334 12:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.334 12:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.334 12:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.334 12:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.334 12:32:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.594 12:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:18:17.594 12:32:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: --dhchap-ctrl-secret DHHC-1:02:OGM5MmYzYWM4YzZkMDhjMTAyNWI0ZDFiNDU3ZTQxZGJiMWQ4MTkzOGMyNGM4NTY029zFdQ==: 00:18:18.969 12:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.969 12:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:18:18.969 12:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.969 12:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.227 12:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.227 12:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.227 12:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:19.227 12:32:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:19.485 12:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:19.485 12:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.485 12:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.485 12:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:19.485 12:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:19.485 12:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.485 12:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.485 12:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.485 12:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.485 12:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.485 12:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.485 12:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.485 12:32:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.417 00:18:20.417 12:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.417 12:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.417 12:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.983 12:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.983 12:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.983 12:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.983 12:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.983 12:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.983 12:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.983 { 00:18:20.983 "cntlid": 141, 00:18:20.983 "qid": 0, 00:18:20.983 "state": "enabled", 00:18:20.983 "thread": "nvmf_tgt_poll_group_000", 00:18:20.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:18:20.983 "listen_address": { 00:18:20.983 "trtype": "RDMA", 00:18:20.983 "adrfam": "IPv4", 00:18:20.983 "traddr": "192.168.100.8", 00:18:20.983 "trsvcid": "4420" 00:18:20.983 }, 00:18:20.983 "peer_address": { 00:18:20.983 "trtype": "RDMA", 00:18:20.983 "adrfam": "IPv4", 00:18:20.983 "traddr": "192.168.100.8", 00:18:20.983 "trsvcid": "54210" 00:18:20.983 }, 00:18:20.983 "auth": { 00:18:20.983 "state": "completed", 00:18:20.983 "digest": "sha512", 00:18:20.983 "dhgroup": "ffdhe8192" 00:18:20.983 } 00:18:20.983 } 00:18:20.983 ]' 00:18:20.983 12:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.983 12:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.983 12:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.983 12:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.983 12:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.983 12:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.983 12:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.983 12:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.241 12:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:18:21.241 12:32:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:01:YTU0YWE1NzdjMzllOTljMDlkMzUwNjEyZTM1ZmU0NzTspBta: 00:18:22.617 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.876 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:18:22.876 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.876 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.876 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.876 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.876 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:22.876 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:23.135 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:23.135 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.135 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:23.135 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:23.135 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:23.135 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.135 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key3 00:18:23.135 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.135 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.135 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.135 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:23.135 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.135 12:32:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.511 00:18:24.511 12:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.511 12:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.511 12:32:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.511 12:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.511 12:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.511 12:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.511 12:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.511 12:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.511 12:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.511 { 00:18:24.511 "cntlid": 143, 00:18:24.511 "qid": 0, 00:18:24.511 "state": "enabled", 00:18:24.511 "thread": "nvmf_tgt_poll_group_000", 00:18:24.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:18:24.511 "listen_address": { 00:18:24.511 "trtype": "RDMA", 00:18:24.511 "adrfam": "IPv4", 00:18:24.511 "traddr": "192.168.100.8", 00:18:24.511 "trsvcid": "4420" 00:18:24.511 }, 00:18:24.511 "peer_address": { 00:18:24.511 "trtype": "RDMA", 00:18:24.511 "adrfam": "IPv4", 00:18:24.511 "traddr": "192.168.100.8", 00:18:24.511 "trsvcid": "44768" 00:18:24.511 }, 00:18:24.511 "auth": { 00:18:24.511 "state": "completed", 00:18:24.511 "digest": "sha512", 00:18:24.511 "dhgroup": "ffdhe8192" 00:18:24.511 } 00:18:24.511 } 00:18:24.511 ]' 00:18:24.511 12:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.511 12:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.511 12:32:30 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.770 12:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:24.770 12:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.770 12:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.770 12:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.770 12:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.030 12:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:18:25.030 12:32:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:18:26.406 12:32:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.406 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:18:26.406 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.406 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.406 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.406 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:26.406 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:26.406 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:26.406 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:26.406 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:26.406 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:26.972 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:26.972 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.972 12:32:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.972 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:26.972 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:26.972 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.972 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.972 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.972 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.972 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.972 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.972 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.972 12:32:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.906 00:18:27.906 12:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.906 12:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.906 12:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.163 12:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.163 12:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.163 12:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.163 12:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.163 12:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.163 12:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.163 { 00:18:28.163 "cntlid": 145, 00:18:28.163 "qid": 0, 00:18:28.163 "state": "enabled", 00:18:28.163 "thread": "nvmf_tgt_poll_group_000", 00:18:28.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:18:28.163 "listen_address": { 00:18:28.163 "trtype": "RDMA", 00:18:28.163 "adrfam": "IPv4", 00:18:28.163 "traddr": "192.168.100.8", 00:18:28.163 "trsvcid": "4420" 00:18:28.163 }, 00:18:28.163 
"peer_address": { 00:18:28.163 "trtype": "RDMA", 00:18:28.163 "adrfam": "IPv4", 00:18:28.163 "traddr": "192.168.100.8", 00:18:28.163 "trsvcid": "51635" 00:18:28.163 }, 00:18:28.163 "auth": { 00:18:28.163 "state": "completed", 00:18:28.163 "digest": "sha512", 00:18:28.163 "dhgroup": "ffdhe8192" 00:18:28.163 } 00:18:28.163 } 00:18:28.163 ]' 00:18:28.163 12:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.420 12:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.420 12:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.420 12:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.420 12:32:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.420 12:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.420 12:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.420 12:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.677 12:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:18:28.677 12:32:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:00:ZmM5MzcwNzY3ODUxYjk2ZmMxN2Y1MzRkYTdkMmRlZTE3ZWQwM2E2Mjg4YTk3MzBhHxDBxw==: --dhchap-ctrl-secret DHHC-1:03:MmQ2MjNlNWQ4MTFkYTMxMmRiNDdiZDY2MDBlZTQwMjI3YjI2Y2ViOGE1M2Y0Y2QyY2Y1OGQ2Y2NjNTU5MjBhYToEWCo=: 00:18:30.050 12:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.309 12:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:18:30.309 12:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.309 12:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.309 12:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.309 12:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 00:18:30.309 12:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.309 12:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.309 12:32:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.309 12:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:30.309 12:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:30.309 12:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:30.309 12:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:30.309 12:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.309 12:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:30.309 12:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.309 12:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:30.309 12:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:30.309 12:32:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:31.244 request: 00:18:31.244 { 00:18:31.244 "name": "nvme0", 00:18:31.244 "trtype": "rdma", 00:18:31.244 "traddr": "192.168.100.8", 00:18:31.244 "adrfam": "ipv4", 00:18:31.244 "trsvcid": "4420", 00:18:31.244 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:31.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:18:31.244 "prchk_reftag": false, 00:18:31.244 "prchk_guard": false, 00:18:31.244 "hdgst": false, 00:18:31.244 "ddgst": false, 00:18:31.244 "dhchap_key": "key2", 00:18:31.244 "allow_unrecognized_csi": false, 00:18:31.244 "method": "bdev_nvme_attach_controller", 00:18:31.244 "req_id": 1 00:18:31.244 } 00:18:31.244 Got JSON-RPC error response 00:18:31.244 response: 00:18:31.244 { 00:18:31.244 "code": -5, 00:18:31.244 "message": "Input/output error" 00:18:31.244 } 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:31.244 12:32:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:32.620 request: 00:18:32.620 { 00:18:32.620 "name": "nvme0", 00:18:32.620 "trtype": "rdma", 00:18:32.620 "traddr": "192.168.100.8", 00:18:32.620 "adrfam": "ipv4", 00:18:32.620 "trsvcid": "4420", 00:18:32.620 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:32.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:18:32.620 "prchk_reftag": false, 00:18:32.620 "prchk_guard": false, 00:18:32.620 "hdgst": false, 00:18:32.620 "ddgst": false, 00:18:32.620 "dhchap_key": "key1", 00:18:32.620 "dhchap_ctrlr_key": "ckey2", 00:18:32.620 "allow_unrecognized_csi": false, 00:18:32.620 "method": "bdev_nvme_attach_controller", 00:18:32.620 "req_id": 1 00:18:32.620 } 00:18:32.620 Got JSON-RPC error response 00:18:32.620 response: 00:18:32.620 { 00:18:32.620 "code": -5, 00:18:32.620 "message": "Input/output error" 00:18:32.620 } 00:18:32.620 12:32:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:32.620 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:32.620 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:32.620 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:32.620 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:18:32.620 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.620 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.620 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.621 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 00:18:32.621 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.621 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.621 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.621 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.621 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:32.621 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.621 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:32.621 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:32.621 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:32.621 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:32.621 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.621 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.621 12:32:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.558 request: 00:18:33.558 { 00:18:33.558 "name": "nvme0", 
00:18:33.558 "trtype": "rdma", 00:18:33.558 "traddr": "192.168.100.8", 00:18:33.558 "adrfam": "ipv4", 00:18:33.558 "trsvcid": "4420", 00:18:33.558 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:33.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:18:33.558 "prchk_reftag": false, 00:18:33.558 "prchk_guard": false, 00:18:33.558 "hdgst": false, 00:18:33.558 "ddgst": false, 00:18:33.558 "dhchap_key": "key1", 00:18:33.558 "dhchap_ctrlr_key": "ckey1", 00:18:33.558 "allow_unrecognized_csi": false, 00:18:33.558 "method": "bdev_nvme_attach_controller", 00:18:33.558 "req_id": 1 00:18:33.558 } 00:18:33.558 Got JSON-RPC error response 00:18:33.558 response: 00:18:33.558 { 00:18:33.558 "code": -5, 00:18:33.558 "message": "Input/output error" 00:18:33.558 } 00:18:33.558 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:33.558 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:33.558 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:33.558 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:33.558 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:18:33.558 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.558 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.558 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.558 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2764678 00:18:33.558 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2764678 ']' 00:18:33.558 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2764678 00:18:33.558 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:33.558 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.558 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2764678 00:18:33.558 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.558 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.558 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2764678' 00:18:33.558 killing process with pid 2764678 00:18:33.558 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2764678 00:18:33.558 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2764678 00:18:33.820 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:33.820 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:33.820 12:32:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:33.820 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.820 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2787019 00:18:33.820 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:33.820 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2787019 00:18:33.820 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2787019 ']' 00:18:33.820 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.820 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.820 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.820 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.820 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.078 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.078 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:34.078 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:34.078 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:34.078 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.078 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.078 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:34.078 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2787019 00:18:34.078 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2787019 ']' 00:18:34.078 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.078 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.078 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
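The waitforlisten step traced here boils down to polling the freshly started target's UNIX-domain RPC socket until it answers. A minimal bash sketch of that idea — assuming the default /var/tmp/spdk.sock path and the stock rpc_get_methods RPC; the real helper's retry count and interval may differ:

    # poll the SPDK app's RPC socket until it responds (run from the spdk checkout)
    wait_for_rpc_sock() {
        local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
        while (( retries-- > 0 )); do
            # rpc_get_methods only succeeds once nvmf_tgt is listening on the socket
            if scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        return 1   # target never came up
    }

    wait_for_rpc_sock /var/tmp/spdk.sock || exit 1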
00:18:34.078 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.078 12:32:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.337 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.337 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:34.337 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:34.337 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.337 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.639 null0 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.MTC 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.4Dx ]] 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4Dx 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.qxw 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Zr0 ]] 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Zr0 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
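Two of the four key pairs are registered at this point, and the loop iterations that continue below follow the same pattern. Stripped of the xtrace noise, the key-loading step reduces to roughly the following; the /tmp/spdk.key-* paths stand in for whatever the earlier key-generation step produced, and ckey3 is intentionally absent, matching the empty [[ -n '' ]] check later in the trace:

    # register each DH-HMAC-CHAP host key, plus its optional controller ("ckey") counterpart
    keys=(/tmp/spdk.key-null.MTC /tmp/spdk.key-sha256.qxw /tmp/spdk.key-sha384.2Pj /tmp/spdk.key-sha512.oGb)
    ckeys=(/tmp/spdk.key-sha512.4Dx /tmp/spdk.key-sha384.Zr0 /tmp/spdk.key-sha256.XZs '')
    for i in "${!keys[@]}"; do
        # the host key is always registered under the name "key$i"
        scripts/rpc.py keyring_file_add_key "key$i" "${keys[$i]}"
        # a bidirectional controller key is optional; register it as "ckey$i" when present
        [[ -n ${ckeys[$i]} ]] && scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    done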
00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.2Pj 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.XZs ]] 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XZs 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.oGb 00:18:34.639 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.640 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.640 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.640 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:34.640 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:34.640 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.640 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:34.640 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:34.640 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:34.640 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.640 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key3 00:18:34.640 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.640 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.926 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.926 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:34.926 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.926 12:32:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.829 nvme0n1 00:18:36.829 12:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.829 12:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.829 12:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.829 12:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.829 12:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.829 12:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.830 12:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.830 12:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.830 12:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.830 { 00:18:36.830 "cntlid": 1, 00:18:36.830 "qid": 0, 00:18:36.830 "state": "enabled", 00:18:36.830 "thread": "nvmf_tgt_poll_group_000", 00:18:36.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:18:36.830 "listen_address": { 00:18:36.830 "trtype": "RDMA", 00:18:36.830 "adrfam": "IPv4", 00:18:36.830 "traddr": "192.168.100.8", 00:18:36.830 "trsvcid": "4420" 00:18:36.830 }, 00:18:36.830 "peer_address": { 00:18:36.830 "trtype": "RDMA", 00:18:36.830 "adrfam": "IPv4", 00:18:36.830 "traddr": "192.168.100.8", 00:18:36.830 "trsvcid": "46235" 00:18:36.830 }, 00:18:36.830 "auth": { 00:18:36.830 "state": "completed", 00:18:36.830 "digest": "sha512", 00:18:36.830 "dhgroup": "ffdhe8192" 00:18:36.830 } 00:18:36.830 } 00:18:36.830 ]' 00:18:36.830 12:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.830 12:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.830 12:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.830 12:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:36.830 12:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.087 12:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.088 12:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.088 12:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.346 12:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:18:37.346 12:32:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:18:38.720 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.720 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:18:38.720 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.720 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.720 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.720 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key3 00:18:38.720 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.720 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.720 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.720 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:38.720 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:39.286 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:39.286 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:39.286 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:39.286 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:39.286 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.286 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:39.286 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.286 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:39.286 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:39.286 12:32:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:39.545 request: 00:18:39.545 { 00:18:39.545 "name": "nvme0", 00:18:39.545 "trtype": "rdma", 00:18:39.545 "traddr": "192.168.100.8", 00:18:39.545 "adrfam": "ipv4", 00:18:39.545 "trsvcid": "4420", 00:18:39.545 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:39.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:18:39.545 "prchk_reftag": false, 00:18:39.545 "prchk_guard": false, 00:18:39.545 "hdgst": false, 00:18:39.545 "ddgst": false, 00:18:39.545 "dhchap_key": "key3", 00:18:39.545 "allow_unrecognized_csi": false, 00:18:39.545 "method": "bdev_nvme_attach_controller", 00:18:39.545 "req_id": 1 00:18:39.545 } 00:18:39.545 Got JSON-RPC error response 00:18:39.545 response: 00:18:39.545 { 00:18:39.545 "code": -5, 00:18:39.545 "message": "Input/output error" 00:18:39.545 } 00:18:39.545 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:39.545 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.545 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.545 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.545 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:39.545 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:39.545 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:39.545 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:39.803 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:39.803 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:39.803 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:39.803 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:39.803 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.804 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:39.804 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.804 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 
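Both negative checks in this stretch use the same recipe: narrow the host's permitted DH-HMAC-CHAP parameters with bdev_nvme_set_options, then attach with a key whose digest or DH group can no longer be negotiated and expect JSON-RPC error -5 (Input/output error). The attach attempt that continues below is the ffdhe2048-only variant of that check. Condensed to its essentials, with $hostnqn standing in for the host NQN used throughout this run, the sha256-only case looks like:

    HOSTSOCK=/var/tmp/host.sock
    # restrict the host side so the sha512-hashed key3 cannot be negotiated
    scripts/rpc.py -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha256
    if scripts/rpc.py -s $HOSTSOCK bdev_nvme_attach_controller -t rdma -f ipv4 \
          -a 192.168.100.8 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
          -b nvme0 --dhchap-key key3; then
        echo 'unexpected success: attach should fail when no common digest exists' >&2
        exit 1
    fi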
00:18:39.804 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:39.804 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:40.370 request: 00:18:40.370 { 00:18:40.370 "name": "nvme0", 00:18:40.370 "trtype": "rdma", 00:18:40.370 "traddr": "192.168.100.8", 00:18:40.370 "adrfam": "ipv4", 00:18:40.370 "trsvcid": "4420", 00:18:40.370 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:40.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:18:40.370 "prchk_reftag": false, 00:18:40.370 "prchk_guard": false, 00:18:40.370 "hdgst": false, 00:18:40.370 "ddgst": false, 00:18:40.370 "dhchap_key": "key3", 00:18:40.370 "allow_unrecognized_csi": false, 00:18:40.370 "method": "bdev_nvme_attach_controller", 00:18:40.370 "req_id": 1 00:18:40.370 } 00:18:40.370 Got JSON-RPC error response 00:18:40.370 response: 00:18:40.370 { 00:18:40.370 "code": -5, 00:18:40.370 "message": "Input/output error" 00:18:40.370 } 00:18:40.370 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:40.370 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:40.370 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:40.370 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:40.370 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:40.370 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:40.371 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:40.371 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:40.371 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:40.371 12:32:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:40.629 12:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:18:40.629 12:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.629 12:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.629 12:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:40.629 12:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:18:40.629 12:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.629 12:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.629 12:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.629 12:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:40.629 12:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:40.629 12:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:40.629 12:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:40.629 12:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.629 12:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:40.629 12:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.629 12:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:40.629 12:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:40.629 12:32:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:41.565 request: 00:18:41.565 { 00:18:41.565 "name": "nvme0", 00:18:41.565 "trtype": "rdma", 00:18:41.565 "traddr": "192.168.100.8", 00:18:41.565 "adrfam": "ipv4", 00:18:41.565 "trsvcid": "4420", 00:18:41.565 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:41.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:18:41.565 "prchk_reftag": false, 00:18:41.565 "prchk_guard": false, 00:18:41.565 "hdgst": false, 00:18:41.565 "ddgst": false, 00:18:41.565 "dhchap_key": "key0", 00:18:41.565 "dhchap_ctrlr_key": "key1", 00:18:41.565 "allow_unrecognized_csi": false, 00:18:41.565 "method": "bdev_nvme_attach_controller", 00:18:41.565 "req_id": 1 00:18:41.565 } 00:18:41.565 Got JSON-RPC error response 00:18:41.565 response: 00:18:41.565 { 00:18:41.565 "code": -5, 00:18:41.565 "message": "Input/output error" 00:18:41.565 } 00:18:41.565 12:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:41.565 12:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:41.565 
12:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:41.565 12:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:41.565 12:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:41.565 12:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:41.565 12:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:41.824 nvme0n1 00:18:41.824 12:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:41.824 12:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:41.824 12:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.083 12:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.083 12:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.083 12:32:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.651 12:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 00:18:42.651 12:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.651 12:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.651 12:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.651 12:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:42.651 12:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:42.651 12:32:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:44.552 nvme0n1 00:18:44.552 12:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:44.552 12:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:44.552 12:32:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.552 12:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.552 12:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:44.552 12:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.552 12:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.552 12:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.552 12:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:44.552 12:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:44.552 12:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.120 12:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.120 12:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:18:45.120 12:32:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid f19ece52-b769-e111-bd1d-001e673d80ae -l 0 --dhchap-secret DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: --dhchap-ctrl-secret DHHC-1:03:OWNjZWZlODM2ZDZhYzVhNjM1ZmZjNDQ2NzA0MWViYzVhYWU4MWU4OGRlNTQ0MTZkNTc2YTZmZWM1ZGEyNThkMpRWRZE=: 00:18:46.495 12:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:46.495 12:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:46.495 12:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:46.495 12:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:46.495 12:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:46.495 12:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:46.495 12:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:46.495 12:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.495 12:32:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.495 12:32:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:18:46.495 12:32:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:46.495 12:32:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:46.495 12:32:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:46.495 12:32:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.495 12:32:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:46.495 12:32:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.495 12:32:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:46.495 12:32:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:46.495 12:32:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:47.870 request: 00:18:47.870 { 00:18:47.870 "name": "nvme0", 00:18:47.870 "trtype": "rdma", 00:18:47.870 "traddr": "192.168.100.8", 00:18:47.870 "adrfam": "ipv4", 00:18:47.870 "trsvcid": "4420", 00:18:47.870 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:47.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae", 00:18:47.870 "prchk_reftag": false, 00:18:47.870 "prchk_guard": false, 00:18:47.870 "hdgst": false, 00:18:47.870 "ddgst": false, 00:18:47.870 "dhchap_key": "key1", 00:18:47.870 "allow_unrecognized_csi": false, 00:18:47.870 "method": "bdev_nvme_attach_controller", 00:18:47.870 "req_id": 1 00:18:47.870 } 00:18:47.870 Got JSON-RPC error response 00:18:47.870 response: 00:18:47.870 { 00:18:47.870 "code": -5, 00:18:47.870 "message": "Input/output error" 00:18:47.870 } 00:18:47.870 12:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:47.870 12:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:47.870 12:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:47.870 12:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:47.870 12:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:47.870 12:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 
--dhchap-ctrlr-key key3 00:18:47.870 12:32:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:49.774 nvme0n1 00:18:49.774 12:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:49.774 12:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:49.774 12:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.774 12:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.774 12:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.774 12:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.032 12:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:18:50.032 12:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.032 12:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.032 12:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.032 12:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:50.032 12:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:50.032 12:32:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:50.598 nvme0n1 00:18:50.598 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:50.598 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:50.598 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.857 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.857 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.858 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.424 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:51.424 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.424 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.424 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.424 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: '' 2s 00:18:51.424 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:51.424 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:51.424 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: 00:18:51.424 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:51.424 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:51.424 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:51.424 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: ]] 00:18:51.424 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZGIyZjE3N2E2NDk4Y2FjNTE4MmIzN2Q1YTkyZGRlMzSo5mSQ: 00:18:51.424 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:51.424 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:51.424 12:32:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:53.324 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:53.324 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:53.324 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:53.324 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:53.324 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:53.324 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:53.324 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:53.325 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:53.325 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.325 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.325 
12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.325 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: 2s 00:18:53.325 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:53.325 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:53.325 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:53.325 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: 00:18:53.325 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:53.325 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:53.325 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:53.325 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: ]] 00:18:53.325 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZmY3NWU2MTdhNjhmMDFlNGIxZWNmNzI0NWY5ZTczYWI1NDNjMzIyNjFjMWVkMTk5+/zyZg==: 00:18:53.325 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:53.325 12:32:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:55.227 12:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:55.227 12:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:55.227 12:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:55.227 12:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:55.227 12:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:55.227 12:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:55.227 12:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:55.227 12:33:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.487 12:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:55.487 12:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.487 12:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.487 12:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.487 12:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:55.487 12:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:55.487 12:33:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:57.388 nvme0n1 00:18:57.388 12:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:57.388 12:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.388 12:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.388 12:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.388 12:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:57.388 12:33:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:58.325 12:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:58.325 12:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:58.325 12:33:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.583 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.583 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:18:58.583 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.583 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.583 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.583 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:58.583 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:59.149 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:59.149 12:33:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:59.149 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.407 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.407 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:59.407 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.407 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.407 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.407 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:59.407 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:59.407 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:59.407 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:59.407 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:59.407 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:59.407 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:59.407 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:59.407 12:33:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:00.343 request: 00:19:00.343 { 00:19:00.343 "name": "nvme0", 00:19:00.343 "dhchap_key": "key1", 00:19:00.343 "dhchap_ctrlr_key": "key3", 00:19:00.343 "method": "bdev_nvme_set_keys", 00:19:00.343 "req_id": 1 00:19:00.343 } 00:19:00.343 Got JSON-RPC error response 00:19:00.343 response: 00:19:00.343 { 00:19:00.343 "code": -13, 00:19:00.343 "message": "Permission denied" 00:19:00.343 } 00:19:00.343 12:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:00.343 12:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:00.343 12:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:00.343 12:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:00.343 12:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:00.343 12:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:00.343 12:33:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.910 12:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:19:00.910 12:33:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:01.844 12:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:01.844 12:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:01.844 12:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.102 12:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:19:02.102 12:33:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:03.036 12:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:03.036 12:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:03.036 12:33:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.609 12:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:03.609 12:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:03.609 12:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.609 12:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.609 12:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.609 12:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:03.609 12:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:03.609 12:33:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:05.547 nvme0n1 00:19:05.547 12:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:05.547 12:33:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.547 12:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.547 12:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.547 12:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:05.547 12:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:05.547 12:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:05.547 12:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:05.547 12:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.547 12:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:05.547 12:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.547 12:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:05.547 12:33:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:06.481 request: 00:19:06.481 { 00:19:06.481 "name": "nvme0", 00:19:06.481 "dhchap_key": "key2", 00:19:06.481 "dhchap_ctrlr_key": "key0", 00:19:06.481 "method": "bdev_nvme_set_keys", 00:19:06.481 "req_id": 1 00:19:06.481 } 00:19:06.481 Got JSON-RPC error response 00:19:06.481 response: 00:19:06.481 { 00:19:06.481 "code": -13, 00:19:06.481 "message": "Permission denied" 00:19:06.481 } 00:19:06.481 12:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:06.481 12:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:06.481 12:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:06.481 12:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:06.481 12:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:06.481 12:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:06.481 12:33:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.739 12:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:06.739 12:33:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:07.673 12:33:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:07.673 12:33:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:07.673 12:33:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.932 12:33:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:07.932 12:33:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:09.305 12:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:09.305 12:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:09.305 12:33:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.305 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:09.305 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:09.305 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:09.305 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2764703 00:19:09.305 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2764703 ']' 00:19:09.305 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2764703 00:19:09.305 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:09.305 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.305 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2764703 00:19:09.563 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:09.563 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:09.563 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2764703' 00:19:09.563 killing process with pid 2764703 00:19:09.563 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2764703 00:19:09.563 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2764703 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:09.821 rmmod nvme_rdma 00:19:09.821 rmmod nvme_fabrics 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2787019 ']' 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2787019 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2787019 ']' 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2787019 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2787019 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2787019' 00:19:09.821 killing process with pid 2787019 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2787019 00:19:09.821 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2787019 00:19:10.079 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:10.079 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:10.079 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.MTC /tmp/spdk.key-sha256.qxw /tmp/spdk.key-sha384.2Pj /tmp/spdk.key-sha512.oGb /tmp/spdk.key-sha512.4Dx /tmp/spdk.key-sha384.Zr0 /tmp/spdk.key-sha256.XZs '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:19:10.079 00:19:10.079 real 4m39.654s 00:19:10.079 user 10m59.659s 00:19:10.079 sys 0m23.666s 00:19:10.079 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.079 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.079 ************************************ 00:19:10.079 END TEST nvmf_auth_target 00:19:10.079 ************************************ 00:19:10.079 12:33:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:19:10.079 12:33:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:10.079 12:33:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:10.079 12:33:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:19:10.079 12:33:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:19:10.079 12:33:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm 
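Two helpers carry the shutdown above: killprocess (autotest_common.sh), which checks that the pid is alive and is not a bare sudo before killing and reaping it, and the nvmftestfini sequence, which syncs, unloads the host NVMe/RDMA modules inside a set +e window (the rmmod lines are modprobe's verbose output), kills the target, and deletes the generated DH-HMAC-CHAP key files. A condensed Linux-only sketch of both; the key-file glob stands in for the explicit list in the trace:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                             # still running?
      [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                        # reap; ignore exit status
  }

  sync
  set +e
  modprobe -v -r nvme-rdma        # emits the rmmod nvme_rdma / nvme_fabrics lines seen above
  modprobe -v -r nvme-fabrics
  set -e
  killprocess "$nvmfpid"          # 2787019 in this run
  rm -f /tmp/spdk.key-*           # e.g. spdk.key-sha256.qxw, spdk.key-sha512.oGb, ...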
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:19:10.079 12:33:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:10.079 12:33:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.079 12:33:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:10.079 ************************************ 00:19:10.079 START TEST nvmf_srq_overwhelm 00:19:10.079 ************************************ 00:19:10.079 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:19:10.079 * Looking for test storage... 00:19:10.079 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:10.080 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:10.080 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # lcov --version 00:19:10.080 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:10.339 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:10.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.340 --rc genhtml_branch_coverage=1 00:19:10.340 --rc genhtml_function_coverage=1 00:19:10.340 --rc genhtml_legend=1 00:19:10.340 --rc geninfo_all_blocks=1 00:19:10.340 --rc geninfo_unexecuted_blocks=1 00:19:10.340 00:19:10.340 ' 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:10.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.340 --rc genhtml_branch_coverage=1 00:19:10.340 --rc genhtml_function_coverage=1 00:19:10.340 --rc genhtml_legend=1 00:19:10.340 --rc geninfo_all_blocks=1 00:19:10.340 --rc geninfo_unexecuted_blocks=1 00:19:10.340 00:19:10.340 ' 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:10.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.340 --rc genhtml_branch_coverage=1 00:19:10.340 --rc genhtml_function_coverage=1 00:19:10.340 --rc genhtml_legend=1 00:19:10.340 --rc geninfo_all_blocks=1 00:19:10.340 --rc geninfo_unexecuted_blocks=1 00:19:10.340 00:19:10.340 ' 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:10.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.340 --rc genhtml_branch_coverage=1 00:19:10.340 --rc genhtml_function_coverage=1 00:19:10.340 --rc genhtml_legend=1 00:19:10.340 --rc geninfo_all_blocks=1 00:19:10.340 --rc geninfo_unexecuted_blocks=1 00:19:10.340 00:19:10.340 ' 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
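The cmp_versions walk above (scripts/common.sh) decides whether the installed lcov predates 2.x so the matching --rc option spelling gets exported. A compact re-derivation of the 'lt' comparison, not the script's literal body:

  lt() {   # lt A B: succeed when dotted version A < version B
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0    # missing fields count as 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal is not less-than
  }
  lt 1.15 2 && echo 'lcov < 2: keep the lcov_branch_coverage=1 style options'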
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:10.340 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
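The 'integer expression expected' message above is a captured quirk, not a test failure: common.sh line 33 runs [ '' -eq 1 ] because an optional flag variable is unset, and execution simply continues past it. A defensive sketch of the guard that would silence it; the variable name here is hypothetical, since the real one is not visible in this trace:

  : "${OPTIONAL_TEST_FLAG:=0}"          # hypothetical name; default empty -> 0
  if [ "$OPTIONAL_TEST_FLAG" -eq 1 ]; then
      echo 'flag-specific app arguments would be appended here'
  fi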
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:19:10.340 12:33:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:12.879 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:12.879 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:19:12.879 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:12.879 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:12.879 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:12.879 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:12.879 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:12.879 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:19:12.879 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:12.879 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:19:12.879 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:19:12.879 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:19:12.879 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:19:12.879 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:19:12.879 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:19:12.879 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:12.879 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:12.879 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:19:12.880 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:19:12.880 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:19:12.880 Found net devices under 0000:83:00.0: mlx_0_0 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:19:12.880 Found net devices under 0000:83:00.1: mlx_0_1 00:19:12.880 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
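The discovery above boils down to: take the mlx5 PCI functions that matched vendor:device 0x15b3:0x1015 and read the netdev names sysfs publishes beneath each one, exactly the /sys/bus/pci/devices/$pci/net/* glob in the trace. Sketched with this host's addresses:

  for pci in 0000:83:00.0 0000:83:00.1; do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$dev" ] || continue                          # glob may not match
          echo "Found net devices under $pci: ${dev##*/}"    # -> mlx_0_0 / mlx_0_1
      done
  done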
00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:12.881 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:12.881 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:19:12.881 altname enp131s0f0np0 00:19:12.881 inet 192.168.100.8/24 scope global mlx_0_0 00:19:12.881 valid_lft forever preferred_lft forever 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:12.881 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:12.881 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:19:12.881 altname enp131s0f1np1 00:19:12.881 inet 192.168.100.9/24 scope global mlx_0_1 00:19:12.881 valid_lft forever preferred_lft forever 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:12.881 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:12.882 12:33:18 
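get_ip_address, as traced here for both ports, is a single pipeline: the interface's first IPv4 address with the prefix length cut off. Equivalent one-liner built from the same tools:

  get_ip_address() {
      # column 4 of 'ip -o -4 addr show' is ADDR/PREFIX; keep only ADDR
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # 192.168.100.8 on this host
  get_ip_address mlx_0_1   # 192.168.100.9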
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:12.882 12:33:18 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:12.882 192.168.100.9' 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:12.882 192.168.100.9' 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:12.882 192.168.100.9' 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # nvmfpid=2792066 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:12.882 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 2792066 00:19:12.883 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # '[' -z 2792066 ']' 00:19:12.883 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.883 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.883 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
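nvmfappstart above backgrounds nvmf_tgt with core mask 0xF and blocks until the RPC socket answers. A sketch of that start-and-wait step; polling rpc_get_methods is one way to probe readiness, though the traced waitforlisten helper watches the socket itself rather than this exact call:

  NVMF_TGT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $NVMF_TGT -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Keep probing until the target listens on the default RPC socket.
  until $RPC -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done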
00:19:12.883 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.883 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:12.883 [2024-11-20 12:33:18.320878] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:19:12.883 [2024-11-20 12:33:18.320986] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.883 [2024-11-20 12:33:18.395276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:12.883 [2024-11-20 12:33:18.459452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.883 [2024-11-20 12:33:18.459526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.883 [2024-11-20 12:33:18.459543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.883 [2024-11-20 12:33:18.459556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.883 [2024-11-20 12:33:18.459567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:12.883 [2024-11-20 12:33:18.460842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.883 [2024-11-20 12:33:18.460896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.883 [2024-11-20 12:33:18.460951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:12.883 [2024-11-20 12:33:18.460955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.883 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.883 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@868 -- # return 0 00:19:12.883 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:12.883 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:12.883 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:12.883 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.883 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:19:12.883 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.883 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:13.142 [2024-11-20 12:33:18.660259] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa6cdf0/0xa712e0) succeed. 00:19:13.142 [2024-11-20 12:33:18.675922] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa6e480/0xab2980) succeed. 
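The transport above is created with --num-shared-buffers 1024 -u 8192 -s 1024. For this test the interesting knob is -s, which for the RDMA transport caps the shared receive queue depth (assuming -s maps to --max-srq-depth as in rpc.py's RDMA options); that is the queue the srq_overwhelm workload later tries to oversubscribe. The same call, spelled out:

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # 1024 shared buffers, 8192-byte IO units, SRQ depth capped at 1024
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024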
00:19:13.142 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.142 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:19:13.142 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:19:13.142 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:19:13.142 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.142 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:13.142 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.142 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:13.142 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.142 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:13.142 Malloc0 00:19:13.142 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.142 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:19:13.142 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.142 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:13.142 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.142 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:19:13.142 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.142 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:13.142 [2024-11-20 12:33:18.788936] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:13.142 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.142 12:33:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:19:14.075 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:19:14.075 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:19:14.075 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:14.075 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:14.075 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- 
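Each pass of the seq 0 5 loop above performs the same five steps, so the whole setup collapses to this sketch; serials and NQNs follow the traced pattern, with HOSTNQN/HOSTID as printed by nvme gen-hostnqn earlier in the run:

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae
  for i in $(seq 0 5); do
      $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      $RPC bdev_malloc_create 64 512 -b Malloc$i            # 64 MiB bdev, 512 B blocks
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
          -t rdma -a 192.168.100.8 -s 4420
      # Host side: connect with 15 I/O queues, as NVME_CONNECT was rewritten to above.
      nvme connect -i 15 --hostnqn=$HOSTNQN --hostid=${HOSTNQN#*uuid:} \
          -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
  done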
# lsblk -l -o NAME 00:19:14.075 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:14.075 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:19:14.075 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:19:14.075 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:14.075 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.075 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:14.075 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.075 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:14.075 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.075 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:14.333 Malloc1 00:19:14.333 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.333 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:14.333 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.333 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:14.333 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.333 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:14.333 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.333 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:14.333 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.333 12:33:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:15.266 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:19:15.266 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:19:15.266 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:15.266 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme1n1 00:19:15.266 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:15.266 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1246 -- # grep -q -w nvme1n1 00:19:15.266 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:19:15.266 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:19:15.266 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:15.266 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.266 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:15.266 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.266 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:19:15.266 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.266 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:15.266 Malloc2 00:19:15.266 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.266 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:19:15.266 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.266 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:15.266 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.267 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:19:15.267 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.267 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:15.267 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.267 12:33:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:19:16.201 12:33:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:19:16.201 12:33:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:19:16.201 12:33:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:16.201 12:33:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme2n1 00:19:16.463 12:33:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:16.463 12:33:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme2n1 00:19:16.463 12:33:21 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:19:16.463 12:33:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:19:16.463 12:33:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:19:16.463 12:33:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.463 12:33:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:16.463 12:33:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.463 12:33:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:19:16.463 12:33:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.463 12:33:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:16.463 Malloc3 00:19:16.463 12:33:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.463 12:33:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:19:16.463 12:33:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.463 12:33:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:16.463 12:33:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.463 12:33:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:19:16.463 12:33:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.463 12:33:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:16.463 12:33:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.463 12:33:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:19:17.395 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:19:17.395 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:19:17.395 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:17.395 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme3n1 00:19:17.395 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:17.395 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme3n1 00:19:17.395 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:19:17.395 
12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:19:17.395 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:19:17.395 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.395 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:17.395 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.395 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:19:17.395 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.395 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:17.395 Malloc4 00:19:17.395 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.395 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:19:17.395 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.395 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:17.396 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.396 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:19:17.396 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.396 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:17.396 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.396 12:33:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:19:18.329 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:19:18.329 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:19:18.329 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:18.329 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme4n1 00:19:18.329 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:18.588 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme4n1 00:19:18.588 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:19:18.588 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
00:19:18.588 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005
00:19:18.588 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:18.588 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:19:18.588 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:18.588 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5
00:19:18.588 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:18.588 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:19:18.588 Malloc5
00:19:18.588 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:18.588 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5
00:19:18.588 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:18.588 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:19:18.588 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:18.588 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420
00:19:18.588 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:18.588 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:19:18.588 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:18.588 12:33:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420
00:19:19.522 12:33:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1
00:19:19.522 12:33:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0
00:19:19.522 12:33:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:19:19.522 12:33:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme5n1
00:19:19.522 12:33:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:19:19.522 12:33:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme5n1
00:19:19.522 12:33:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0
00:19:19.522 12:33:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13
00:19:19.522 [global]
00:19:19.522 thread=1
00:19:19.522 invalidate=1
00:19:19.522 rw=read
00:19:19.522 time_based=1
00:19:19.522 runtime=10
00:19:19.522 ioengine=libaio
00:19:19.522 direct=1
00:19:19.522 bs=1048576
00:19:19.522 iodepth=128
00:19:19.522 norandommap=1
00:19:19.522 numjobs=13
00:19:19.522
00:19:19.522 [job0]
00:19:19.522 filename=/dev/nvme0n1
00:19:19.522 [job1]
00:19:19.522 filename=/dev/nvme1n1
00:19:19.522 [job2]
00:19:19.522 filename=/dev/nvme2n1
00:19:19.522 [job3]
00:19:19.522 filename=/dev/nvme3n1
00:19:19.522 [job4]
00:19:19.522 filename=/dev/nvme4n1
00:19:19.522 [job5]
00:19:19.522 filename=/dev/nvme5n1
00:19:19.522 Could not set queue depth (nvme0n1)
00:19:19.522 Could not set queue depth (nvme1n1)
00:19:19.522 Could not set queue depth (nvme2n1)
00:19:19.522 Could not set queue depth (nvme3n1)
00:19:19.522 Could not set queue depth (nvme4n1)
00:19:19.522 Could not set queue depth (nvme5n1)
00:19:19.779 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:19:19.779 ...
00:19:19.779 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:19:19.779 ...
00:19:19.779 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:19:19.779 ...
00:19:19.779 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:19:19.779 ...
00:19:19.779 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:19:19.779 ...
00:19:19.779 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:19:19.779 ...
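The xtrace above is the setup half of target/srq_overwhelm.sh: each pass of the for-loop creates the subsystem nqn.2016-06.io.spdk:cnode$i, backs it with a 64 MiB malloc bdev of 512-byte blocks, exposes it on the RDMA listener at 192.168.100.8:4420, connects to it with nvme-cli, and polls lsblk until /dev/nvme${i}n1 appears. The [global]/[jobN] stanzas are the fio job file that scripts/fio-wrapper generated from its arguments (-p nvmf -i 1048576 -d 128 -t read -r 10 -n 13); six jobs at numjobs=13 account for the 78 threads fio starts below, and 78 readers at iodepth=128 are what the test uses to flood the target's shared receive queue. A minimal bash sketch of the same sequence, reconstructed from the trace (the rpc.py path and the inline substitute for the waitforblk helper are assumptions for illustration, not the script's verbatim source):

    #!/usr/bin/env bash
    set -euo pipefail

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed backend of rpc_cmd
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae

    for i in $(seq 0 5); do
        "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"                 # 64 MiB bdev, 512 B blocks
        "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
        nvme connect -i 15 --hostnqn="$hostnqn" --hostid="${hostnqn#*uuid:}" \
            -t rdma -n "nqn.2016-06.io.spdk:cnode$i" -a 192.168.100.8 -s 4420
        # Stand-in for the waitforblk helper traced above: wait for the namespace to appear.
        until lsblk -l -o NAME | grep -q -w "nvme${i}n1"; do sleep 1; done
    done

    # Hand the six connected namespaces to fio exactly as the trace does:
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13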
00:19:19.779 fio-3.35 00:19:19.779 Starting 78 threads 00:19:34.650 00:19:34.650 job0: (groupid=0, jobs=1): err= 0: pid=2792834: Wed Nov 20 12:33:38 2024 00:19:34.650 read: IOPS=2, BW=2485KiB/s (2544kB/s)(30.0MiB/12363msec) 00:19:34.650 slat (usec): min=437, max=6424.5k, avg=411153.41, stdev=1391834.73 00:19:34.650 clat (msec): min=27, max=12361, avg=10565.95, stdev=3386.82 00:19:34.650 lat (msec): min=4231, max=12362, avg=10977.10, stdev=2752.72 00:19:34.650 clat percentiles (msec): 00:19:34.650 | 1.00th=[ 28], 5.00th=[ 4245], 10.00th=[ 4245], 20.00th=[10671], 00:19:34.650 | 30.00th=[10671], 40.00th=[12281], 50.00th=[12281], 60.00th=[12281], 00:19:34.650 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12416], 00:19:34.650 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:19:34.650 | 99.99th=[12416] 00:19:34.650 lat (msec) : 50=3.33%, >=2000=96.67% 00:19:34.650 cpu : usr=0.00%, sys=0.14%, ctx=30, majf=0, minf=7681 00:19:34.650 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0% 00:19:34.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.650 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:34.650 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.650 job0: (groupid=0, jobs=1): err= 0: pid=2792835: Wed Nov 20 12:33:38 2024 00:19:34.650 read: IOPS=65, BW=65.4MiB/s (68.6MB/s)(810MiB/12381msec) 00:19:34.650 slat (usec): min=50, max=2156.9k, avg=12689.40, stdev=135750.60 00:19:34.650 clat (msec): min=202, max=12370, avg=1679.98, stdev=2773.25 00:19:34.650 lat (msec): min=204, max=12371, avg=1692.67, stdev=2790.63 00:19:34.650 clat percentiles (msec): 00:19:34.650 | 1.00th=[ 205], 5.00th=[ 207], 10.00th=[ 207], 20.00th=[ 211], 00:19:34.650 | 30.00th=[ 213], 40.00th=[ 279], 50.00th=[ 376], 60.00th=[ 418], 00:19:34.650 | 70.00th=[ 510], 80.00th=[ 2534], 90.00th=[ 6342], 95.00th=[ 8658], 00:19:34.650 | 99.00th=[12281], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:19:34.650 | 99.99th=[12416] 00:19:34.650 bw ( KiB/s): min= 2035, max=540672, per=12.04%, avg=174846.38, stdev=188458.84, samples=8 00:19:34.650 iops : min= 1, max= 528, avg=170.62, stdev=184.17, samples=8 00:19:34.650 lat (msec) : 250=36.42%, 500=32.59%, 750=6.54%, >=2000=24.44% 00:19:34.650 cpu : usr=0.03%, sys=0.92%, ctx=686, majf=0, minf=32769 00:19:34.650 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:19:34.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.650 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.650 issued rwts: total=810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.650 job0: (groupid=0, jobs=1): err= 0: pid=2792836: Wed Nov 20 12:33:38 2024 00:19:34.650 read: IOPS=13, BW=13.7MiB/s (14.4MB/s)(170MiB/12376msec) 00:19:34.650 slat (usec): min=68, max=5323.5k, avg=60485.54, stdev=465194.64 00:19:34.650 clat (msec): min=383, max=12028, avg=8972.66, stdev=4189.67 00:19:34.650 lat (msec): min=403, max=12029, avg=9033.15, stdev=4159.13 00:19:34.650 clat percentiles (msec): 00:19:34.650 | 1.00th=[ 401], 5.00th=[ 460], 10.00th=[ 535], 20.00th=[ 5604], 00:19:34.650 | 30.00th=[ 6477], 40.00th=[11745], 50.00th=[11879], 60.00th=[11879], 00:19:34.650 | 70.00th=[11879], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:19:34.650 | 99.00th=[12013], 
99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:19:34.650 | 99.99th=[12013] 00:19:34.650 bw ( KiB/s): min= 1410, max=47104, per=1.20%, avg=17485.20, stdev=21135.44, samples=5 00:19:34.650 iops : min= 1, max= 46, avg=17.00, stdev=20.71, samples=5 00:19:34.650 lat (msec) : 500=8.24%, 750=5.29%, >=2000=86.47% 00:19:34.650 cpu : usr=0.01%, sys=0.63%, ctx=128, majf=0, minf=32769 00:19:34.650 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.7%, 16=9.4%, 32=18.8%, >=64=62.9% 00:19:34.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.650 complete : 0=0.0%, 4=97.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.3% 00:19:34.650 issued rwts: total=170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.650 job0: (groupid=0, jobs=1): err= 0: pid=2792837: Wed Nov 20 12:33:38 2024 00:19:34.650 read: IOPS=1, BW=1242KiB/s (1272kB/s)(15.0MiB/12365msec) 00:19:34.650 slat (usec): min=512, max=6366.6k, avg=822435.05, stdev=1915075.89 00:19:34.650 clat (msec): min=27, max=10719, avg=8004.59, stdev=3076.70 00:19:34.650 lat (msec): min=6394, max=12364, avg=8827.02, stdev=2356.58 00:19:34.650 clat percentiles (msec): 00:19:34.650 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 6409], 20.00th=[ 6409], 00:19:34.650 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 6477], 60.00th=[10671], 00:19:34.650 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:19:34.650 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:19:34.650 | 99.99th=[10671] 00:19:34.650 lat (msec) : 50=6.67%, >=2000=93.33% 00:19:34.650 cpu : usr=0.00%, sys=0.07%, ctx=19, majf=0, minf=3841 00:19:34.650 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:34.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.650 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.650 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.651 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.651 job0: (groupid=0, jobs=1): err= 0: pid=2792838: Wed Nov 20 12:33:38 2024 00:19:34.651 read: IOPS=40, BW=40.2MiB/s (42.2MB/s)(499MiB/12406msec) 00:19:34.651 slat (usec): min=49, max=6362.2k, avg=24802.32, stdev=312568.12 00:19:34.651 clat (msec): min=26, max=12339, avg=3089.77, stdev=4125.89 00:19:34.651 lat (msec): min=131, max=12346, avg=3114.57, stdev=4135.20 00:19:34.651 clat percentiles (msec): 00:19:34.651 | 1.00th=[ 132], 5.00th=[ 134], 10.00th=[ 136], 20.00th=[ 138], 00:19:34.651 | 30.00th=[ 279], 40.00th=[ 309], 50.00th=[ 1368], 60.00th=[ 1452], 00:19:34.651 | 70.00th=[ 1838], 80.00th=[10134], 90.00th=[10268], 95.00th=[10402], 00:19:34.651 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.651 | 99.99th=[12281] 00:19:34.651 bw ( KiB/s): min= 4096, max=247808, per=7.47%, avg=108544.00, stdev=110566.71, samples=7 00:19:34.651 iops : min= 4, max= 242, avg=106.00, stdev=107.98, samples=7 00:19:34.651 lat (msec) : 50=0.20%, 250=27.86%, 500=18.04%, 2000=26.65%, >=2000=27.25% 00:19:34.651 cpu : usr=0.02%, sys=0.89%, ctx=382, majf=0, minf=32769 00:19:34.651 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.4% 00:19:34.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.651 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:34.651 issued rwts: total=499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.651 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:19:34.651 job0: (groupid=0, jobs=1): err= 0: pid=2792839: Wed Nov 20 12:33:38 2024 00:19:34.651 read: IOPS=2, BW=2986KiB/s (3058kB/s)(36.0MiB/12346msec) 00:19:34.651 slat (usec): min=501, max=4183.1k, avg=342086.35, stdev=1034578.72 00:19:34.651 clat (msec): min=30, max=12345, avg=8747.70, stdev=2888.29 00:19:34.651 lat (msec): min=4213, max=12345, avg=9089.79, stdev=2533.87 00:19:34.651 clat percentiles (msec): 00:19:34.651 | 1.00th=[ 31], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6409], 00:19:34.651 | 30.00th=[ 6409], 40.00th=[10537], 50.00th=[10537], 60.00th=[10671], 00:19:34.651 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[12281], 00:19:34.651 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.651 | 99.99th=[12281] 00:19:34.651 lat (msec) : 50=2.78%, >=2000=97.22% 00:19:34.651 cpu : usr=0.01%, sys=0.20%, ctx=30, majf=0, minf=9217 00:19:34.651 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0% 00:19:34.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.651 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.651 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.651 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.651 job0: (groupid=0, jobs=1): err= 0: pid=2792840: Wed Nov 20 12:33:38 2024 00:19:34.651 read: IOPS=77, BW=77.7MiB/s (81.5MB/s)(961MiB/12367msec) 00:19:34.651 slat (usec): min=59, max=2119.0k, avg=10657.02, stdev=135685.08 00:19:34.651 clat (msec): min=95, max=11167, avg=1591.99, stdev=3557.93 00:19:34.651 lat (msec): min=96, max=11169, avg=1602.65, stdev=3570.82 00:19:34.651 clat percentiles (msec): 00:19:34.651 | 1.00th=[ 103], 5.00th=[ 104], 10.00th=[ 105], 20.00th=[ 109], 00:19:34.651 | 30.00th=[ 114], 40.00th=[ 114], 50.00th=[ 115], 60.00th=[ 115], 00:19:34.651 | 70.00th=[ 184], 80.00th=[ 334], 90.00th=[11073], 95.00th=[11073], 00:19:34.651 | 99.00th=[11208], 99.50th=[11208], 99.90th=[11208], 99.95th=[11208], 00:19:34.651 | 99.99th=[11208] 00:19:34.651 bw ( KiB/s): min= 1424, max=997376, per=14.70%, avg=213424.38, stdev=389736.82, samples=8 00:19:34.651 iops : min= 1, max= 974, avg=208.25, stdev=380.71, samples=8 00:19:34.651 lat (msec) : 100=0.21%, 250=74.30%, 500=10.09%, 750=0.10%, >=2000=15.30% 00:19:34.651 cpu : usr=0.06%, sys=1.06%, ctx=759, majf=0, minf=32769 00:19:34.651 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.4% 00:19:34.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.651 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.651 issued rwts: total=961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.651 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.651 job0: (groupid=0, jobs=1): err= 0: pid=2792841: Wed Nov 20 12:33:38 2024 00:19:34.651 read: IOPS=16, BW=16.4MiB/s (17.2MB/s)(202MiB/12341msec) 00:19:34.651 slat (usec): min=62, max=2132.9k, avg=50752.00, stdev=294647.26 00:19:34.651 clat (msec): min=1635, max=12008, avg=7428.94, stdev=3285.44 00:19:34.651 lat (msec): min=1637, max=12008, avg=7479.70, stdev=3277.37 00:19:34.651 clat percentiles (msec): 00:19:34.651 | 1.00th=[ 1636], 5.00th=[ 1687], 10.00th=[ 3608], 20.00th=[ 4212], 00:19:34.651 | 30.00th=[ 5604], 40.00th=[ 6477], 50.00th=[ 7752], 60.00th=[ 8557], 00:19:34.651 | 70.00th=[10537], 80.00th=[10671], 90.00th=[12013], 95.00th=[12013], 00:19:34.651 | 99.00th=[12013], 99.50th=[12013], 
99.90th=[12013], 99.95th=[12013], 00:19:34.651 | 99.99th=[12013] 00:19:34.651 bw ( KiB/s): min= 1440, max=38834, per=1.75%, avg=25473.33, stdev=16268.48, samples=6 00:19:34.651 iops : min= 1, max= 37, avg=24.50, stdev=15.73, samples=6 00:19:34.651 lat (msec) : 2000=8.42%, >=2000=91.58% 00:19:34.651 cpu : usr=0.02%, sys=0.58%, ctx=90, majf=0, minf=32769 00:19:34.651 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=7.9%, 32=15.8%, >=64=68.8% 00:19:34.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.651 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:19:34.651 issued rwts: total=202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.651 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.651 job0: (groupid=0, jobs=1): err= 0: pid=2792842: Wed Nov 20 12:33:38 2024 00:19:34.651 read: IOPS=7, BW=7332KiB/s (7508kB/s)(89.0MiB/12430msec) 00:19:34.651 slat (usec): min=428, max=5791.0k, avg=116157.94, stdev=676992.71 00:19:34.651 clat (msec): min=2090, max=12427, avg=7389.42, stdev=2594.89 00:19:34.651 lat (msec): min=4234, max=12429, avg=7505.58, stdev=2586.38 00:19:34.651 clat percentiles (msec): 00:19:34.651 | 1.00th=[ 2089], 5.00th=[ 6074], 10.00th=[ 6141], 20.00th=[ 6141], 00:19:34.651 | 30.00th=[ 6208], 40.00th=[ 6208], 50.00th=[ 6342], 60.00th=[ 6342], 00:19:34.651 | 70.00th=[ 6409], 80.00th=[12281], 90.00th=[12416], 95.00th=[12416], 00:19:34.651 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:19:34.651 | 99.99th=[12416] 00:19:34.651 lat (msec) : >=2000=100.00% 00:19:34.651 cpu : usr=0.00%, sys=0.45%, ctx=54, majf=0, minf=22785 00:19:34.651 IO depths : 1=1.1%, 2=2.2%, 4=4.5%, 8=9.0%, 16=18.0%, 32=36.0%, >=64=29.2% 00:19:34.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.651 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.651 issued rwts: total=89,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.651 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.651 job0: (groupid=0, jobs=1): err= 0: pid=2792843: Wed Nov 20 12:33:38 2024 00:19:34.651 read: IOPS=62, BW=62.5MiB/s (65.5MB/s)(779MiB/12463msec) 00:19:34.651 slat (usec): min=49, max=6574.6k, avg=13296.00, stdev=241907.93 00:19:34.651 clat (msec): min=217, max=12388, avg=1830.09, stdev=3150.24 00:19:34.651 lat (msec): min=219, max=12416, avg=1843.39, stdev=3165.54 00:19:34.651 clat percentiles (msec): 00:19:34.651 | 1.00th=[ 218], 5.00th=[ 220], 10.00th=[ 222], 20.00th=[ 222], 00:19:34.651 | 30.00th=[ 224], 40.00th=[ 284], 50.00th=[ 384], 60.00th=[ 435], 00:19:34.651 | 70.00th=[ 514], 80.00th=[ 2165], 90.00th=[ 8792], 95.00th=[ 8792], 00:19:34.651 | 99.00th=[ 8926], 99.50th=[10671], 99.90th=[12416], 99.95th=[12416], 00:19:34.651 | 99.99th=[12416] 00:19:34.651 bw ( KiB/s): min= 2048, max=520192, per=18.39%, avg=267059.20, stdev=193070.03, samples=5 00:19:34.651 iops : min= 2, max= 508, avg=260.80, stdev=188.54, samples=5 00:19:34.651 lat (msec) : 250=35.69%, 500=32.61%, 750=9.63%, >=2000=22.08% 00:19:34.651 cpu : usr=0.02%, sys=1.28%, ctx=644, majf=0, minf=32769 00:19:34.651 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.9% 00:19:34.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.651 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:34.651 issued rwts: total=779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.651 latency : target=0, window=0, percentile=100.00%, depth=128 
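Every one of the 78 fio threads reports its own block like those above (slat/clat/lat percentiles, per-thread bandwidth, IO-depth distribution), so the rest of this run's output repeats the same shape dozens of times. For skimming a saved copy of the console output, a small awk filter along these lines (illustrative only; build.log is a placeholder for wherever the log was captured) reduces each block to one pid/IOPS summary line:

    awk '/job[0-9]+: \(groupid=/ { pid = $0; sub(/.*pid=/, "", pid); sub(/:.*/, "", pid) }
         / read: IOPS=/          { print "pid " pid ": " substr($0, index($0, "read:")) }' build.log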
00:19:34.651 job0: (groupid=0, jobs=1): err= 0: pid=2792844: Wed Nov 20 12:33:38 2024 00:19:34.651 read: IOPS=17, BW=17.9MiB/s (18.8MB/s)(223MiB/12438msec) 00:19:34.651 slat (usec): min=61, max=6389.8k, avg=55648.18, stdev=474199.78 00:19:34.651 clat (msec): min=27, max=12380, avg=6858.21, stdev=5598.62 00:19:34.651 lat (msec): min=218, max=12419, avg=6913.86, stdev=5587.54 00:19:34.651 clat percentiles (msec): 00:19:34.651 | 1.00th=[ 218], 5.00th=[ 222], 10.00th=[ 239], 20.00th=[ 255], 00:19:34.651 | 30.00th=[ 384], 40.00th=[ 1217], 50.00th=[11745], 60.00th=[11745], 00:19:34.651 | 70.00th=[11745], 80.00th=[11879], 90.00th=[11879], 95.00th=[11879], 00:19:34.651 | 99.00th=[11879], 99.50th=[11879], 99.90th=[12416], 99.95th=[12416], 00:19:34.651 | 99.99th=[12416] 00:19:34.651 bw ( KiB/s): min= 2048, max=180224, per=3.35%, avg=48640.00, stdev=87759.85, samples=4 00:19:34.651 iops : min= 2, max= 176, avg=47.50, stdev=85.70, samples=4 00:19:34.651 lat (msec) : 50=0.45%, 250=14.80%, 500=18.83%, 750=5.83%, 2000=1.79% 00:19:34.651 lat (msec) : >=2000=58.30% 00:19:34.651 cpu : usr=0.02%, sys=0.66%, ctx=190, majf=0, minf=32769 00:19:34.651 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.2%, 32=14.3%, >=64=71.7% 00:19:34.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.651 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:19:34.651 issued rwts: total=223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.651 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.651 job0: (groupid=0, jobs=1): err= 0: pid=2792845: Wed Nov 20 12:33:38 2024 00:19:34.651 read: IOPS=14, BW=14.4MiB/s (15.1MB/s)(179MiB/12454msec) 00:19:34.651 slat (usec): min=65, max=2174.4k, avg=57904.14, stdev=314128.30 00:19:34.651 clat (msec): min=492, max=12378, avg=5173.51, stdev=2661.95 00:19:34.651 lat (msec): min=493, max=12378, avg=5231.41, stdev=2693.78 00:19:34.651 clat percentiles (msec): 00:19:34.651 | 1.00th=[ 493], 5.00th=[ 498], 10.00th=[ 3775], 20.00th=[ 3809], 00:19:34.651 | 30.00th=[ 3842], 40.00th=[ 3910], 50.00th=[ 3977], 60.00th=[ 4111], 00:19:34.651 | 70.00th=[ 4279], 80.00th=[ 8557], 90.00th=[ 8658], 95.00th=[ 8658], 00:19:34.651 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:19:34.651 | 99.99th=[12416] 00:19:34.651 bw ( KiB/s): min= 1851, max=104448, per=3.66%, avg=53149.50, stdev=72547.03, samples=2 00:19:34.651 iops : min= 1, max= 102, avg=51.50, stdev=71.42, samples=2 00:19:34.652 lat (msec) : 500=5.03%, 750=0.56%, >=2000=94.41% 00:19:34.652 cpu : usr=0.01%, sys=0.67%, ctx=133, majf=0, minf=32769 00:19:34.652 IO depths : 1=0.6%, 2=1.1%, 4=2.2%, 8=4.5%, 16=8.9%, 32=17.9%, >=64=64.8% 00:19:34.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.652 complete : 0=0.0%, 4=98.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.9% 00:19:34.652 issued rwts: total=179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.652 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.652 job0: (groupid=0, jobs=1): err= 0: pid=2792846: Wed Nov 20 12:33:38 2024 00:19:34.652 read: IOPS=2, BW=2478KiB/s (2538kB/s)(30.0MiB/12396msec) 00:19:34.652 slat (usec): min=466, max=4291.3k, avg=343217.34, stdev=1107885.07 00:19:34.652 clat (msec): min=2099, max=12350, avg=10087.12, stdev=2444.68 00:19:34.652 lat (msec): min=6390, max=12395, avg=10430.34, stdev=1959.09 00:19:34.652 clat percentiles (msec): 00:19:34.652 | 1.00th=[ 2106], 5.00th=[ 6409], 10.00th=[ 6409], 20.00th=[ 6409], 00:19:34.652 | 30.00th=[10671], 
40.00th=[10671], 50.00th=[10671], 60.00th=[10671], 00:19:34.652 | 70.00th=[10671], 80.00th=[12281], 90.00th=[12416], 95.00th=[12416], 00:19:34.652 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:19:34.652 | 99.99th=[12416] 00:19:34.652 lat (msec) : >=2000=100.00% 00:19:34.652 cpu : usr=0.00%, sys=0.16%, ctx=18, majf=0, minf=7681 00:19:34.652 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0% 00:19:34.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.652 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:34.652 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.652 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.652 job1: (groupid=0, jobs=1): err= 0: pid=2792847: Wed Nov 20 12:33:38 2024 00:19:34.652 read: IOPS=31, BW=31.8MiB/s (33.3MB/s)(392MiB/12329msec) 00:19:34.652 slat (usec): min=60, max=2100.6k, avg=25980.06, stdev=207098.86 00:19:34.652 clat (msec): min=129, max=8258, avg=2218.94, stdev=2314.62 00:19:34.652 lat (msec): min=130, max=8371, avg=2244.92, stdev=2343.07 00:19:34.652 clat percentiles (msec): 00:19:34.652 | 1.00th=[ 130], 5.00th=[ 131], 10.00th=[ 132], 20.00th=[ 136], 00:19:34.652 | 30.00th=[ 142], 40.00th=[ 174], 50.00th=[ 351], 60.00th=[ 3742], 00:19:34.652 | 70.00th=[ 3775], 80.00th=[ 3842], 90.00th=[ 6409], 95.00th=[ 6611], 00:19:34.652 | 99.00th=[ 6678], 99.50th=[ 8221], 99.90th=[ 8288], 99.95th=[ 8288], 00:19:34.652 | 99.99th=[ 8288] 00:19:34.652 bw ( KiB/s): min= 1410, max=339968, per=12.44%, avg=180694.00, stdev=170163.69, samples=3 00:19:34.652 iops : min= 1, max= 332, avg=176.33, stdev=166.37, samples=3 00:19:34.652 lat (msec) : 250=48.21%, 500=3.83%, 2000=0.26%, >=2000=47.70% 00:19:34.652 cpu : usr=0.02%, sys=0.89%, ctx=292, majf=0, minf=32769 00:19:34.652 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.2%, >=64=83.9% 00:19:34.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.652 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:19:34.652 issued rwts: total=392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.652 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.652 job1: (groupid=0, jobs=1): err= 0: pid=2792848: Wed Nov 20 12:33:38 2024 00:19:34.652 read: IOPS=4, BW=4954KiB/s (5073kB/s)(60.0MiB/12402msec) 00:19:34.652 slat (usec): min=529, max=2138.0k, avg=171792.28, stdev=552405.57 00:19:34.652 clat (msec): min=2093, max=12400, avg=10484.28, stdev=3120.66 00:19:34.652 lat (msec): min=4167, max=12401, avg=10656.07, stdev=2928.74 00:19:34.652 clat percentiles (msec): 00:19:34.652 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 6342], 00:19:34.652 | 30.00th=[10671], 40.00th=[12416], 50.00th=[12416], 60.00th=[12416], 00:19:34.652 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:19:34.652 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:19:34.652 | 99.99th=[12416] 00:19:34.652 lat (msec) : >=2000=100.00% 00:19:34.652 cpu : usr=0.00%, sys=0.30%, ctx=95, majf=0, minf=15361 00:19:34.652 IO depths : 1=1.7%, 2=3.3%, 4=6.7%, 8=13.3%, 16=26.7%, 32=48.3%, >=64=0.0% 00:19:34.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.652 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.652 issued rwts: total=60,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.652 latency : target=0, window=0, percentile=100.00%, depth=128 
00:19:34.652 job1: (groupid=0, jobs=1): err= 0: pid=2792849: Wed Nov 20 12:33:38 2024 00:19:34.652 read: IOPS=224, BW=224MiB/s (235MB/s)(2325MiB/10359msec) 00:19:34.652 slat (usec): min=46, max=2036.9k, avg=4408.67, stdev=74014.69 00:19:34.652 clat (msec): min=90, max=5959, avg=416.13, stdev=817.65 00:19:34.652 lat (msec): min=90, max=5976, avg=420.54, stdev=826.30 00:19:34.652 clat percentiles (msec): 00:19:34.652 | 1.00th=[ 101], 5.00th=[ 105], 10.00th=[ 110], 20.00th=[ 111], 00:19:34.652 | 30.00th=[ 111], 40.00th=[ 112], 50.00th=[ 112], 60.00th=[ 112], 00:19:34.652 | 70.00th=[ 113], 80.00th=[ 165], 90.00th=[ 1955], 95.00th=[ 2366], 00:19:34.652 | 99.00th=[ 4144], 99.50th=[ 4144], 99.90th=[ 4597], 99.95th=[ 4597], 00:19:34.652 | 99.99th=[ 5940] 00:19:34.652 bw ( KiB/s): min=34816, max=1175552, per=44.26%, avg=642779.43, stdev=511775.26, samples=7 00:19:34.652 iops : min= 34, max= 1148, avg=627.71, stdev=499.78, samples=7 00:19:34.652 lat (msec) : 100=1.46%, 250=83.66%, 500=2.19%, 2000=5.25%, >=2000=7.44% 00:19:34.652 cpu : usr=0.11%, sys=2.13%, ctx=1998, majf=0, minf=32769 00:19:34.652 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:19:34.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.652 issued rwts: total=2325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.652 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.652 job1: (groupid=0, jobs=1): err= 0: pid=2792850: Wed Nov 20 12:33:38 2024 00:19:34.652 read: IOPS=39, BW=40.0MiB/s (41.9MB/s)(493MiB/12332msec) 00:19:34.652 slat (usec): min=48, max=2022.6k, avg=20688.15, stdev=181718.63 00:19:34.652 clat (msec): min=135, max=12290, avg=2778.21, stdev=3844.72 00:19:34.652 lat (msec): min=137, max=12298, avg=2798.90, stdev=3865.46 00:19:34.652 clat percentiles (msec): 00:19:34.652 | 1.00th=[ 138], 5.00th=[ 138], 10.00th=[ 138], 20.00th=[ 138], 00:19:34.652 | 30.00th=[ 140], 40.00th=[ 140], 50.00th=[ 142], 60.00th=[ 1552], 00:19:34.652 | 70.00th=[ 4144], 80.00th=[ 6342], 90.00th=[10000], 95.00th=[10134], 00:19:34.652 | 99.00th=[12147], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.652 | 99.99th=[12281] 00:19:34.652 bw ( KiB/s): min= 1395, max=532480, per=8.60%, avg=124819.17, stdev=202319.90, samples=6 00:19:34.652 iops : min= 1, max= 520, avg=121.83, stdev=197.62, samples=6 00:19:34.652 lat (msec) : 250=56.80%, 500=3.04%, 2000=7.30%, >=2000=32.86% 00:19:34.652 cpu : usr=0.02%, sys=0.74%, ctx=404, majf=0, minf=32769 00:19:34.652 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.5%, >=64=87.2% 00:19:34.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.652 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:34.652 issued rwts: total=493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.652 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.652 job1: (groupid=0, jobs=1): err= 0: pid=2792851: Wed Nov 20 12:33:38 2024 00:19:34.652 read: IOPS=6, BW=6706KiB/s (6867kB/s)(81.0MiB/12369msec) 00:19:34.652 slat (usec): min=466, max=2060.4k, avg=126395.93, stdev=466119.04 00:19:34.652 clat (msec): min=2129, max=12366, avg=10267.96, stdev=2537.93 00:19:34.652 lat (msec): min=4137, max=12368, avg=10394.36, stdev=2377.45 00:19:34.652 clat percentiles (msec): 00:19:34.652 | 1.00th=[ 2123], 5.00th=[ 6342], 10.00th=[ 6342], 20.00th=[ 8490], 00:19:34.652 | 30.00th=[ 8557], 40.00th=[10671], 
50.00th=[10671], 60.00th=[12147], 00:19:34.652 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12416], 95.00th=[12416], 00:19:34.652 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:19:34.652 | 99.99th=[12416] 00:19:34.652 lat (msec) : >=2000=100.00% 00:19:34.652 cpu : usr=0.00%, sys=0.34%, ctx=70, majf=0, minf=20737 00:19:34.652 IO depths : 1=1.2%, 2=2.5%, 4=4.9%, 8=9.9%, 16=19.8%, 32=39.5%, >=64=22.2% 00:19:34.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.652 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.652 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.652 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.652 job1: (groupid=0, jobs=1): err= 0: pid=2792852: Wed Nov 20 12:33:38 2024 00:19:34.652 read: IOPS=26, BW=26.2MiB/s (27.5MB/s)(270MiB/10307msec) 00:19:34.652 slat (usec): min=47, max=2125.2k, avg=37806.80, stdev=253305.70 00:19:34.652 clat (msec): min=97, max=6470, avg=2569.95, stdev=1685.62 00:19:34.652 lat (msec): min=126, max=8596, avg=2607.76, stdev=1727.01 00:19:34.652 clat percentiles (msec): 00:19:34.652 | 1.00th=[ 127], 5.00th=[ 144], 10.00th=[ 165], 20.00th=[ 1938], 00:19:34.652 | 30.00th=[ 2022], 40.00th=[ 2072], 50.00th=[ 2106], 60.00th=[ 2123], 00:19:34.652 | 70.00th=[ 4077], 80.00th=[ 4111], 90.00th=[ 4463], 95.00th=[ 6007], 00:19:34.652 | 99.00th=[ 6477], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477], 00:19:34.652 | 99.99th=[ 6477] 00:19:34.652 bw ( KiB/s): min=137216, max=153600, per=10.01%, avg=145408.00, stdev=11585.24, samples=2 00:19:34.652 iops : min= 134, max= 150, avg=142.00, stdev=11.31, samples=2 00:19:34.652 lat (msec) : 100=0.37%, 250=17.78%, 2000=7.78%, >=2000=74.07% 00:19:34.652 cpu : usr=0.00%, sys=0.76%, ctx=196, majf=0, minf=32769 00:19:34.652 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=3.0%, 16=5.9%, 32=11.9%, >=64=76.7% 00:19:34.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.652 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:19:34.652 issued rwts: total=270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.652 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.652 job1: (groupid=0, jobs=1): err= 0: pid=2792853: Wed Nov 20 12:33:38 2024 00:19:34.652 read: IOPS=3, BW=3406KiB/s (3488kB/s)(41.0MiB/12327msec) 00:19:34.652 slat (usec): min=437, max=3689.8k, avg=249343.56, stdev=774904.00 00:19:34.652 clat (msec): min=2102, max=12325, avg=8870.46, stdev=3376.73 00:19:34.652 lat (msec): min=4207, max=12326, avg=9119.81, stdev=3239.16 00:19:34.652 clat percentiles (msec): 00:19:34.652 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:19:34.652 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[12281], 00:19:34.652 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:19:34.652 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.652 | 99.99th=[12281] 00:19:34.652 lat (msec) : >=2000=100.00% 00:19:34.653 cpu : usr=0.00%, sys=0.19%, ctx=38, majf=0, minf=10497 00:19:34.653 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0% 00:19:34.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.653 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.653 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.653 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.653 job1: 
(groupid=0, jobs=1): err= 0: pid=2792854: Wed Nov 20 12:33:38 2024 00:19:34.653 read: IOPS=2, BW=2567KiB/s (2629kB/s)(31.0MiB/12365msec) 00:19:34.653 slat (usec): min=458, max=3774.4k, avg=330210.56, stdev=881897.21 00:19:34.653 clat (msec): min=2128, max=12364, avg=8151.09, stdev=3984.98 00:19:34.653 lat (msec): min=4118, max=12364, avg=8481.30, stdev=3892.31 00:19:34.653 clat percentiles (msec): 00:19:34.653 | 1.00th=[ 2123], 5.00th=[ 4111], 10.00th=[ 4144], 20.00th=[ 4212], 00:19:34.653 | 30.00th=[ 4245], 40.00th=[ 4279], 50.00th=[ 6409], 60.00th=[12281], 00:19:34.653 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:19:34.653 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:19:34.653 | 99.99th=[12416] 00:19:34.653 lat (msec) : >=2000=100.00% 00:19:34.653 cpu : usr=0.00%, sys=0.16%, ctx=43, majf=0, minf=7937 00:19:34.653 IO depths : 1=3.2%, 2=6.5%, 4=12.9%, 8=25.8%, 16=51.6%, 32=0.0%, >=64=0.0% 00:19:34.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.653 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:34.653 issued rwts: total=31,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.653 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.653 job1: (groupid=0, jobs=1): err= 0: pid=2792855: Wed Nov 20 12:33:38 2024 00:19:34.653 read: IOPS=3, BW=3660KiB/s (3748kB/s)(44.0MiB/12310msec) 00:19:34.653 slat (usec): min=479, max=2149.3k, avg=231302.91, stdev=636444.30 00:19:34.653 clat (msec): min=2131, max=12302, avg=8424.56, stdev=3291.28 00:19:34.653 lat (msec): min=4202, max=12308, avg=8655.87, stdev=3194.94 00:19:34.653 clat percentiles (msec): 00:19:34.653 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 4279], 00:19:34.653 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 8490], 60.00th=[10671], 00:19:34.653 | 70.00th=[10671], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:19:34.653 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.653 | 99.99th=[12281] 00:19:34.653 lat (msec) : >=2000=100.00% 00:19:34.653 cpu : usr=0.00%, sys=0.24%, ctx=38, majf=0, minf=11265 00:19:34.653 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:19:34.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.653 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.653 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.653 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.653 job1: (groupid=0, jobs=1): err= 0: pid=2792856: Wed Nov 20 12:33:38 2024 00:19:34.653 read: IOPS=3, BW=3319KiB/s (3399kB/s)(40.0MiB/12341msec) 00:19:34.653 slat (usec): min=475, max=3702.6k, avg=255055.14, stdev=780551.28 00:19:34.653 clat (msec): min=2138, max=12339, avg=7291.46, stdev=3188.00 00:19:34.653 lat (msec): min=4194, max=12340, avg=7546.52, stdev=3173.24 00:19:34.653 clat percentiles (msec): 00:19:34.653 | 1.00th=[ 2140], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4279], 00:19:34.653 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 6409], 60.00th=[ 6409], 00:19:34.653 | 70.00th=[ 6409], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:19:34.653 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.653 | 99.99th=[12281] 00:19:34.653 lat (msec) : >=2000=100.00% 00:19:34.653 cpu : usr=0.00%, sys=0.21%, ctx=35, majf=0, minf=10241 00:19:34.653 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 
00:19:34.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.653 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.653 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.653 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.653 job1: (groupid=0, jobs=1): err= 0: pid=2792857: Wed Nov 20 12:33:38 2024 00:19:34.653 read: IOPS=6, BW=6172KiB/s (6320kB/s)(75.0MiB/12443msec) 00:19:34.653 slat (usec): min=467, max=3688.2k, avg=137604.29, stdev=584137.61 00:19:34.653 clat (msec): min=2121, max=12441, avg=10676.00, stdev=3123.20 00:19:34.653 lat (msec): min=4206, max=12442, avg=10813.60, stdev=2964.54 00:19:34.653 clat percentiles (msec): 00:19:34.653 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6409], 00:19:34.653 | 30.00th=[12281], 40.00th=[12416], 50.00th=[12416], 60.00th=[12416], 00:19:34.653 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:19:34.653 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:19:34.653 | 99.99th=[12416] 00:19:34.653 lat (msec) : >=2000=100.00% 00:19:34.653 cpu : usr=0.00%, sys=0.33%, ctx=97, majf=0, minf=19201 00:19:34.653 IO depths : 1=1.3%, 2=2.7%, 4=5.3%, 8=10.7%, 16=21.3%, 32=42.7%, >=64=16.0% 00:19:34.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.653 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.653 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.653 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.653 job1: (groupid=0, jobs=1): err= 0: pid=2792858: Wed Nov 20 12:33:38 2024 00:19:34.653 read: IOPS=3, BW=3147KiB/s (3222kB/s)(38.0MiB/12366msec) 00:19:34.653 slat (usec): min=454, max=2158.5k, avg=269531.85, stdev=679982.66 00:19:34.653 clat (msec): min=2123, max=12365, avg=7423.92, stdev=3351.37 00:19:34.653 lat (msec): min=4216, max=12365, avg=7693.46, stdev=3325.34 00:19:34.653 clat percentiles (msec): 00:19:34.653 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 4279], 00:19:34.653 | 30.00th=[ 4279], 40.00th=[ 6342], 50.00th=[ 6409], 60.00th=[ 6409], 00:19:34.653 | 70.00th=[10671], 80.00th=[12281], 90.00th=[12416], 95.00th=[12416], 00:19:34.653 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:19:34.653 | 99.99th=[12416] 00:19:34.653 lat (msec) : >=2000=100.00% 00:19:34.653 cpu : usr=0.00%, sys=0.22%, ctx=42, majf=0, minf=9729 00:19:34.653 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:19:34.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.653 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.653 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.653 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.653 job1: (groupid=0, jobs=1): err= 0: pid=2792859: Wed Nov 20 12:33:38 2024 00:19:34.653 read: IOPS=7, BW=7428KiB/s (7606kB/s)(90.0MiB/12407msec) 00:19:34.653 slat (usec): min=432, max=3770.3k, avg=114358.93, stdev=534644.79 00:19:34.653 clat (msec): min=2113, max=12405, avg=10131.55, stdev=3030.08 00:19:34.653 lat (msec): min=4193, max=12406, avg=10245.90, stdev=2916.16 00:19:34.653 clat percentiles (msec): 00:19:34.653 | 1.00th=[ 2106], 5.00th=[ 4245], 10.00th=[ 4329], 20.00th=[ 6409], 00:19:34.653 | 30.00th=[ 8423], 40.00th=[12147], 50.00th=[12416], 60.00th=[12416], 00:19:34.653 | 70.00th=[12416], 80.00th=[12416], 
90.00th=[12416], 95.00th=[12416], 00:19:34.653 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:19:34.653 | 99.99th=[12416] 00:19:34.653 lat (msec) : >=2000=100.00% 00:19:34.653 cpu : usr=0.00%, sys=0.39%, ctx=83, majf=0, minf=23041 00:19:34.653 IO depths : 1=1.1%, 2=2.2%, 4=4.4%, 8=8.9%, 16=17.8%, 32=35.6%, >=64=30.0% 00:19:34.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.653 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.653 issued rwts: total=90,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.653 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.653 job2: (groupid=0, jobs=1): err= 0: pid=2792860: Wed Nov 20 12:33:38 2024 00:19:34.653 read: IOPS=6, BW=6716KiB/s (6877kB/s)(81.0MiB/12351msec) 00:19:34.653 slat (usec): min=449, max=2136.2k, avg=126605.32, stdev=467586.58 00:19:34.653 clat (msec): min=2094, max=12349, avg=10047.53, stdev=2631.72 00:19:34.653 lat (msec): min=4231, max=12350, avg=10174.13, stdev=2487.06 00:19:34.653 clat percentiles (msec): 00:19:34.653 | 1.00th=[ 2089], 5.00th=[ 6275], 10.00th=[ 6342], 20.00th=[ 8356], 00:19:34.653 | 30.00th=[ 8557], 40.00th=[ 8557], 50.00th=[12147], 60.00th=[12147], 00:19:34.653 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:19:34.653 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:19:34.653 | 99.99th=[12416] 00:19:34.653 lat (msec) : >=2000=100.00% 00:19:34.653 cpu : usr=0.00%, sys=0.36%, ctx=75, majf=0, minf=20737 00:19:34.653 IO depths : 1=1.2%, 2=2.5%, 4=4.9%, 8=9.9%, 16=19.8%, 32=39.5%, >=64=22.2% 00:19:34.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.653 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.653 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.653 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.653 job2: (groupid=0, jobs=1): err= 0: pid=2792861: Wed Nov 20 12:33:38 2024 00:19:34.653 read: IOPS=6, BW=6494KiB/s (6650kB/s)(78.0MiB/12299msec) 00:19:34.653 slat (usec): min=471, max=2106.5k, avg=130623.57, stdev=475171.68 00:19:34.653 clat (msec): min=2109, max=12296, avg=10671.07, stdev=2522.38 00:19:34.653 lat (msec): min=4124, max=12298, avg=10801.70, stdev=2329.70 00:19:34.653 clat percentiles (msec): 00:19:34.653 | 1.00th=[ 2106], 5.00th=[ 4111], 10.00th=[ 6409], 20.00th=[ 8557], 00:19:34.653 | 30.00th=[10671], 40.00th=[12013], 50.00th=[12013], 60.00th=[12147], 00:19:34.653 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12281], 95.00th=[12281], 00:19:34.653 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.653 | 99.99th=[12281] 00:19:34.653 lat (msec) : >=2000=100.00% 00:19:34.653 cpu : usr=0.00%, sys=0.36%, ctx=70, majf=0, minf=19969 00:19:34.653 IO depths : 1=1.3%, 2=2.6%, 4=5.1%, 8=10.3%, 16=20.5%, 32=41.0%, >=64=19.2% 00:19:34.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.653 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.653 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.653 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.653 job2: (groupid=0, jobs=1): err= 0: pid=2792862: Wed Nov 20 12:33:38 2024 00:19:34.653 read: IOPS=35, BW=36.0MiB/s (37.7MB/s)(446MiB/12401msec) 00:19:34.653 slat (usec): min=56, max=2176.9k, avg=23048.66, stdev=189402.57 00:19:34.654 clat (msec): min=354, max=12181, avg=3256.02, 
stdev=3658.99 00:19:34.654 lat (msec): min=358, max=12217, avg=3279.07, stdev=3677.03 00:19:34.654 clat percentiles (msec): 00:19:34.654 | 1.00th=[ 359], 5.00th=[ 380], 10.00th=[ 397], 20.00th=[ 418], 00:19:34.654 | 30.00th=[ 451], 40.00th=[ 518], 50.00th=[ 1062], 60.00th=[ 2198], 00:19:34.654 | 70.00th=[ 4329], 80.00th=[ 8557], 90.00th=[ 9463], 95.00th=[ 9463], 00:19:34.654 | 99.00th=[10671], 99.50th=[10671], 99.90th=[12147], 99.95th=[12147], 00:19:34.654 | 99.99th=[12147] 00:19:34.654 bw ( KiB/s): min= 2039, max=313344, per=5.62%, avg=81662.88, stdev=114209.53, samples=8 00:19:34.654 iops : min= 1, max= 306, avg=79.62, stdev=111.63, samples=8 00:19:34.654 lat (msec) : 500=37.67%, 750=10.54%, 2000=3.59%, >=2000=48.21% 00:19:34.654 cpu : usr=0.00%, sys=0.77%, ctx=357, majf=0, minf=32769 00:19:34.654 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.9% 00:19:34.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.654 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:34.654 issued rwts: total=446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.654 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.654 job2: (groupid=0, jobs=1): err= 0: pid=2792863: Wed Nov 20 12:33:38 2024 00:19:34.654 read: IOPS=3, BW=3164KiB/s (3240kB/s)(38.0MiB/12299msec) 00:19:34.654 slat (usec): min=419, max=2267.5k, avg=268553.69, stdev=672107.59 00:19:34.654 clat (msec): min=2092, max=12200, avg=8280.51, stdev=3524.48 00:19:34.654 lat (msec): min=4024, max=12298, avg=8549.07, stdev=3427.74 00:19:34.654 clat percentiles (msec): 00:19:34.654 | 1.00th=[ 2089], 5.00th=[ 4010], 10.00th=[ 4077], 20.00th=[ 4077], 00:19:34.654 | 30.00th=[ 4111], 40.00th=[ 6342], 50.00th=[ 8490], 60.00th=[10537], 00:19:34.654 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:19:34.654 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:34.654 | 99.99th=[12147] 00:19:34.654 lat (msec) : >=2000=100.00% 00:19:34.654 cpu : usr=0.00%, sys=0.18%, ctx=51, majf=0, minf=9729 00:19:34.654 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:19:34.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.654 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.654 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.654 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.654 job2: (groupid=0, jobs=1): err= 0: pid=2792864: Wed Nov 20 12:33:38 2024 00:19:34.654 read: IOPS=4, BW=4569KiB/s (4679kB/s)(55.0MiB/12326msec) 00:19:34.654 slat (usec): min=449, max=2066.8k, avg=185744.47, stdev=556359.97 00:19:34.654 clat (msec): min=2109, max=12316, avg=8669.28, stdev=3309.74 00:19:34.654 lat (msec): min=4174, max=12325, avg=8855.02, stdev=3220.25 00:19:34.654 clat percentiles (msec): 00:19:34.654 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4245], 00:19:34.654 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[10537], 60.00th=[10671], 00:19:34.654 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12281], 00:19:34.654 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.654 | 99.99th=[12281] 00:19:34.654 lat (msec) : >=2000=100.00% 00:19:34.654 cpu : usr=0.00%, sys=0.28%, ctx=76, majf=0, minf=14081 00:19:34.654 IO depths : 1=1.8%, 2=3.6%, 4=7.3%, 8=14.5%, 16=29.1%, 32=43.6%, >=64=0.0% 00:19:34.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:19:34.654 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.654 issued rwts: total=55,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.654 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.654 job2: (groupid=0, jobs=1): err= 0: pid=2792865: Wed Nov 20 12:33:38 2024 00:19:34.654 read: IOPS=5, BW=5294KiB/s (5421kB/s)(64.0MiB/12380msec) 00:19:34.654 slat (usec): min=457, max=2053.1k, avg=160239.27, stdev=525267.04 00:19:34.654 clat (msec): min=2123, max=12377, avg=9635.78, stdev=3092.40 00:19:34.654 lat (msec): min=4167, max=12379, avg=9796.02, stdev=2959.83 00:19:34.654 clat percentiles (msec): 00:19:34.654 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 6275], 20.00th=[ 6409], 00:19:34.654 | 30.00th=[ 6409], 40.00th=[10671], 50.00th=[10671], 60.00th=[12281], 00:19:34.654 | 70.00th=[12281], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:19:34.654 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:19:34.654 | 99.99th=[12416] 00:19:34.654 lat (msec) : >=2000=100.00% 00:19:34.654 cpu : usr=0.00%, sys=0.32%, ctx=77, majf=0, minf=16385 00:19:34.654 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:34.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.654 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.654 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.654 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.654 job2: (groupid=0, jobs=1): err= 0: pid=2792866: Wed Nov 20 12:33:38 2024 00:19:34.654 read: IOPS=5, BW=6029KiB/s (6174kB/s)(73.0MiB/12399msec) 00:19:34.654 slat (usec): min=460, max=2134.5k, avg=140755.89, stdev=498885.88 00:19:34.654 clat (msec): min=2123, max=12398, avg=10910.15, stdev=2704.33 00:19:34.654 lat (msec): min=4198, max=12398, avg=11050.91, stdev=2500.36 00:19:34.654 clat percentiles (msec): 00:19:34.654 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[10671], 00:19:34.654 | 30.00th=[10671], 40.00th=[12281], 50.00th=[12281], 60.00th=[12416], 00:19:34.654 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:19:34.654 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:19:34.654 | 99.99th=[12416] 00:19:34.654 lat (msec) : >=2000=100.00% 00:19:34.654 cpu : usr=0.00%, sys=0.34%, ctx=123, majf=0, minf=18689 00:19:34.654 IO depths : 1=1.4%, 2=2.7%, 4=5.5%, 8=11.0%, 16=21.9%, 32=43.8%, >=64=13.7% 00:19:34.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.654 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.654 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.654 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.654 job2: (groupid=0, jobs=1): err= 0: pid=2792867: Wed Nov 20 12:33:38 2024 00:19:34.654 read: IOPS=3, BW=3403KiB/s (3485kB/s)(41.0MiB/12336msec) 00:19:34.654 slat (usec): min=485, max=4227.0k, avg=249501.13, stdev=806166.25 00:19:34.654 clat (msec): min=2106, max=12335, avg=10047.83, stdev=2223.46 00:19:34.654 lat (msec): min=6333, max=12335, avg=10297.33, stdev=1853.14 00:19:34.654 clat percentiles (msec): 00:19:34.654 | 1.00th=[ 2106], 5.00th=[ 6342], 10.00th=[ 6342], 20.00th=[ 8490], 00:19:34.654 | 30.00th=[10537], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671], 00:19:34.654 | 70.00th=[10671], 80.00th=[12147], 90.00th=[12281], 95.00th=[12281], 00:19:34.654 | 99.00th=[12281], 99.50th=[12281], 
99.90th=[12281], 99.95th=[12281], 00:19:34.654 | 99.99th=[12281] 00:19:34.654 lat (msec) : >=2000=100.00% 00:19:34.654 cpu : usr=0.00%, sys=0.22%, ctx=41, majf=0, minf=10497 00:19:34.654 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0% 00:19:34.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.654 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.654 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.654 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.654 job2: (groupid=0, jobs=1): err= 0: pid=2792868: Wed Nov 20 12:33:38 2024 00:19:34.654 read: IOPS=17, BW=17.9MiB/s (18.8MB/s)(221MiB/12337msec) 00:19:34.654 slat (usec): min=61, max=2122.6k, avg=46265.30, stdev=284456.34 00:19:34.654 clat (msec): min=132, max=12281, avg=6940.92, stdev=5069.15 00:19:34.654 lat (msec): min=135, max=12281, avg=6987.19, stdev=5069.77 00:19:34.654 clat percentiles (msec): 00:19:34.654 | 1.00th=[ 136], 5.00th=[ 140], 10.00th=[ 159], 20.00th=[ 167], 00:19:34.654 | 30.00th=[ 1469], 40.00th=[ 5671], 50.00th=[ 8490], 60.00th=[10671], 00:19:34.654 | 70.00th=[11879], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:19:34.654 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.654 | 99.99th=[12281] 00:19:34.654 bw ( KiB/s): min= 2048, max=102400, per=1.89%, avg=27501.71, stdev=36766.35, samples=7 00:19:34.654 iops : min= 2, max= 100, avg=26.86, stdev=35.90, samples=7 00:19:34.654 lat (msec) : 250=20.36%, 500=2.26%, 2000=10.86%, >=2000=66.52% 00:19:34.654 cpu : usr=0.00%, sys=0.62%, ctx=148, majf=0, minf=32770 00:19:34.654 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.2%, 32=14.5%, >=64=71.5% 00:19:34.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.654 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:19:34.654 issued rwts: total=221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.654 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.654 job2: (groupid=0, jobs=1): err= 0: pid=2792869: Wed Nov 20 12:33:38 2024 00:19:34.654 read: IOPS=10, BW=10.3MiB/s (10.8MB/s)(128MiB/12396msec) 00:19:34.654 slat (usec): min=475, max=2109.4k, avg=80320.60, stdev=375871.98 00:19:34.654 clat (msec): min=2113, max=12392, avg=9395.96, stdev=2954.83 00:19:34.654 lat (msec): min=4138, max=12393, avg=9476.29, stdev=2894.43 00:19:34.654 clat percentiles (msec): 00:19:34.654 | 1.00th=[ 4144], 5.00th=[ 6141], 10.00th=[ 6141], 20.00th=[ 6208], 00:19:34.654 | 30.00th=[ 6275], 40.00th=[ 6409], 50.00th=[10671], 60.00th=[12281], 00:19:34.654 | 70.00th=[12281], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:19:34.654 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:19:34.654 | 99.99th=[12416] 00:19:34.654 bw ( KiB/s): min= 1838, max= 1838, per=0.13%, avg=1838.00, stdev= 0.00, samples=1 00:19:34.654 iops : min= 1, max= 1, avg= 1.00, stdev= 0.00, samples=1 00:19:34.654 lat (msec) : >=2000=100.00% 00:19:34.654 cpu : usr=0.00%, sys=0.65%, ctx=108, majf=0, minf=32769 00:19:34.654 IO depths : 1=0.8%, 2=1.6%, 4=3.1%, 8=6.2%, 16=12.5%, 32=25.0%, >=64=50.8% 00:19:34.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.654 complete : 0=0.0%, 4=50.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=50.0% 00:19:34.654 issued rwts: total=128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.654 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.654 job2: 
(groupid=0, jobs=1): err= 0: pid=2792870: Wed Nov 20 12:33:38 2024 00:19:34.654 read: IOPS=2, BW=2898KiB/s (2968kB/s)(35.0MiB/12367msec) 00:19:34.654 slat (usec): min=450, max=2271.2k, avg=293072.97, stdev=694931.87 00:19:34.654 clat (msec): min=2108, max=12364, avg=10237.09, stdev=2313.25 00:19:34.654 lat (msec): min=4139, max=12366, avg=10530.16, stdev=1858.20 00:19:34.654 clat percentiles (msec): 00:19:34.655 | 1.00th=[ 2106], 5.00th=[ 4144], 10.00th=[ 6409], 20.00th=[ 8658], 00:19:34.655 | 30.00th=[10671], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671], 00:19:34.655 | 70.00th=[10671], 80.00th=[12147], 90.00th=[12416], 95.00th=[12416], 00:19:34.655 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:19:34.655 | 99.99th=[12416] 00:19:34.655 lat (msec) : >=2000=100.00% 00:19:34.655 cpu : usr=0.01%, sys=0.14%, ctx=34, majf=0, minf=8961 00:19:34.655 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:19:34.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.655 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.655 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.655 job2: (groupid=0, jobs=1): err= 0: pid=2792871: Wed Nov 20 12:33:38 2024 00:19:34.655 read: IOPS=4, BW=5075KiB/s (5197kB/s)(61.0MiB/12307msec) 00:19:34.655 slat (usec): min=435, max=2041.6k, avg=166850.04, stdev=535360.11 00:19:34.655 clat (msec): min=2128, max=12306, avg=7532.37, stdev=3123.46 00:19:34.655 lat (msec): min=4162, max=12306, avg=7699.22, stdev=3101.76 00:19:34.655 clat percentiles (msec): 00:19:34.655 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 4212], 00:19:34.655 | 30.00th=[ 4245], 40.00th=[ 6342], 50.00th=[ 8423], 60.00th=[ 8423], 00:19:34.655 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[12281], 95.00th=[12281], 00:19:34.655 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.655 | 99.99th=[12281] 00:19:34.655 lat (msec) : >=2000=100.00% 00:19:34.655 cpu : usr=0.00%, sys=0.32%, ctx=67, majf=0, minf=15617 00:19:34.655 IO depths : 1=1.6%, 2=3.3%, 4=6.6%, 8=13.1%, 16=26.2%, 32=49.2%, >=64=0.0% 00:19:34.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.655 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.655 issued rwts: total=61,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.655 job2: (groupid=0, jobs=1): err= 0: pid=2792872: Wed Nov 20 12:33:38 2024 00:19:34.655 read: IOPS=2, BW=2412KiB/s (2470kB/s)(29.0MiB/12313msec) 00:19:34.655 slat (usec): min=528, max=2075.4k, avg=351793.49, stdev=743417.75 00:19:34.655 clat (msec): min=2110, max=12311, avg=7689.81, stdev=3085.93 00:19:34.655 lat (msec): min=4165, max=12312, avg=8041.60, stdev=3007.67 00:19:34.655 clat percentiles (msec): 00:19:34.655 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 4212], 00:19:34.655 | 30.00th=[ 4279], 40.00th=[ 6409], 50.00th=[ 8490], 60.00th=[ 8557], 00:19:34.655 | 70.00th=[10671], 80.00th=[10671], 90.00th=[12281], 95.00th=[12281], 00:19:34.655 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.655 | 99.99th=[12281] 00:19:34.655 lat (msec) : >=2000=100.00% 00:19:34.655 cpu : usr=0.00%, sys=0.15%, ctx=52, majf=0, minf=7425 00:19:34.655 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 
00:19:34.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.655 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:34.655 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.655 job3: (groupid=0, jobs=1): err= 0: pid=2792873: Wed Nov 20 12:33:38 2024 00:19:34.655 read: IOPS=31, BW=31.8MiB/s (33.4MB/s)(393MiB/12355msec) 00:19:34.655 slat (usec): min=65, max=2201.9k, avg=26081.19, stdev=208260.86 00:19:34.655 clat (msec): min=174, max=10696, avg=3408.27, stdev=4332.02 00:19:34.655 lat (msec): min=176, max=12296, avg=3434.35, stdev=4350.69 00:19:34.655 clat percentiles (msec): 00:19:34.655 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 224], 00:19:34.655 | 30.00th=[ 275], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 397], 00:19:34.655 | 70.00th=[ 6409], 80.00th=[ 9866], 90.00th=[10000], 95.00th=[10000], 00:19:34.655 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:19:34.655 | 99.99th=[10671] 00:19:34.655 bw ( KiB/s): min= 1896, max=475136, per=5.36%, avg=77797.00, stdev=175472.98, samples=7 00:19:34.655 iops : min= 1, max= 464, avg=75.71, stdev=171.48, samples=7 00:19:34.655 lat (msec) : 250=25.95%, 500=35.11%, 2000=2.29%, >=2000=36.64% 00:19:34.655 cpu : usr=0.03%, sys=0.96%, ctx=302, majf=0, minf=32769 00:19:34.655 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.1%, >=64=84.0% 00:19:34.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.655 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:19:34.655 issued rwts: total=393,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.655 job3: (groupid=0, jobs=1): err= 0: pid=2792874: Wed Nov 20 12:33:38 2024 00:19:34.655 read: IOPS=8, BW=9148KiB/s (9368kB/s)(92.0MiB/10298msec) 00:19:34.655 slat (usec): min=433, max=2039.2k, avg=110558.46, stdev=426045.67 00:19:34.655 clat (msec): min=125, max=10222, avg=4451.77, stdev=1807.66 00:19:34.655 lat (msec): min=2096, max=10297, avg=4562.33, stdev=1850.70 00:19:34.655 clat percentiles (msec): 00:19:34.655 | 1.00th=[ 126], 5.00th=[ 2265], 10.00th=[ 2265], 20.00th=[ 4044], 00:19:34.655 | 30.00th=[ 4111], 40.00th=[ 4144], 50.00th=[ 4178], 60.00th=[ 4279], 00:19:34.655 | 70.00th=[ 4329], 80.00th=[ 6409], 90.00th=[ 6544], 95.00th=[ 8658], 00:19:34.655 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:19:34.655 | 99.99th=[10268] 00:19:34.655 lat (msec) : 250=1.09%, >=2000=98.91% 00:19:34.655 cpu : usr=0.00%, sys=0.63%, ctx=81, majf=0, minf=23553 00:19:34.655 IO depths : 1=1.1%, 2=2.2%, 4=4.3%, 8=8.7%, 16=17.4%, 32=34.8%, >=64=31.5% 00:19:34.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.655 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.655 issued rwts: total=92,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.655 job3: (groupid=0, jobs=1): err= 0: pid=2792875: Wed Nov 20 12:33:38 2024 00:19:34.655 read: IOPS=8, BW=8757KiB/s (8967kB/s)(105MiB/12278msec) 00:19:34.655 slat (usec): min=459, max=2047.2k, avg=96801.51, stdev=405904.67 00:19:34.655 clat (msec): min=2113, max=10706, avg=6737.80, stdev=1824.31 00:19:34.655 lat (msec): min=4096, max=12277, avg=6834.61, stdev=1846.11 00:19:34.655 clat percentiles (msec): 00:19:34.655 | 1.00th=[ 
4111], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 6141], 00:19:34.655 | 30.00th=[ 6141], 40.00th=[ 6141], 50.00th=[ 6208], 60.00th=[ 6208], 00:19:34.655 | 70.00th=[ 6342], 80.00th=[ 8490], 90.00th=[10537], 95.00th=[10671], 00:19:34.655 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:19:34.655 | 99.99th=[10671] 00:19:34.655 lat (msec) : >=2000=100.00% 00:19:34.655 cpu : usr=0.01%, sys=0.45%, ctx=83, majf=0, minf=26881 00:19:34.655 IO depths : 1=1.0%, 2=1.9%, 4=3.8%, 8=7.6%, 16=15.2%, 32=30.5%, >=64=40.0% 00:19:34.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.655 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.655 issued rwts: total=105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.655 job3: (groupid=0, jobs=1): err= 0: pid=2792876: Wed Nov 20 12:33:38 2024 00:19:34.655 read: IOPS=6, BW=6221KiB/s (6370kB/s)(75.0MiB/12345msec) 00:19:34.655 slat (usec): min=450, max=2130.8k, avg=136790.89, stdev=491129.32 00:19:34.655 clat (msec): min=2085, max=12342, avg=9522.84, stdev=3357.46 00:19:34.655 lat (msec): min=4144, max=12344, avg=9659.63, stdev=3257.87 00:19:34.655 clat percentiles (msec): 00:19:34.655 | 1.00th=[ 2089], 5.00th=[ 4144], 10.00th=[ 4212], 20.00th=[ 6409], 00:19:34.655 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[12147], 60.00th=[12281], 00:19:34.655 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:19:34.655 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.655 | 99.99th=[12281] 00:19:34.655 lat (msec) : >=2000=100.00% 00:19:34.655 cpu : usr=0.00%, sys=0.37%, ctx=71, majf=0, minf=19201 00:19:34.655 IO depths : 1=1.3%, 2=2.7%, 4=5.3%, 8=10.7%, 16=21.3%, 32=42.7%, >=64=16.0% 00:19:34.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.655 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.655 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.655 job3: (groupid=0, jobs=1): err= 0: pid=2792877: Wed Nov 20 12:33:38 2024 00:19:34.655 read: IOPS=5, BW=5148KiB/s (5271kB/s)(62.0MiB/12333msec) 00:19:34.655 slat (usec): min=451, max=2041.9k, avg=165026.33, stdev=527547.91 00:19:34.655 clat (msec): min=2100, max=12330, avg=7720.34, stdev=3214.65 00:19:34.656 lat (msec): min=4123, max=12332, avg=7885.37, stdev=3183.90 00:19:34.656 clat percentiles (msec): 00:19:34.656 | 1.00th=[ 2106], 5.00th=[ 4144], 10.00th=[ 4178], 20.00th=[ 4245], 00:19:34.656 | 30.00th=[ 4245], 40.00th=[ 6342], 50.00th=[ 6342], 60.00th=[ 8490], 00:19:34.656 | 70.00th=[10671], 80.00th=[12147], 90.00th=[12281], 95.00th=[12281], 00:19:34.656 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.656 | 99.99th=[12281] 00:19:34.656 lat (msec) : >=2000=100.00% 00:19:34.656 cpu : usr=0.00%, sys=0.29%, ctx=75, majf=0, minf=15873 00:19:34.656 IO depths : 1=1.6%, 2=3.2%, 4=6.5%, 8=12.9%, 16=25.8%, 32=50.0%, >=64=0.0% 00:19:34.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.656 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.656 issued rwts: total=62,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.656 job3: (groupid=0, jobs=1): err= 0: pid=2792878: Wed Nov 20 12:33:38 2024 00:19:34.656 read: IOPS=3, 
BW=3578KiB/s (3664kB/s)(43.0MiB/12305msec) 00:19:34.656 slat (usec): min=440, max=2170.4k, avg=237679.55, stdev=638969.63 00:19:34.656 clat (msec): min=2084, max=12304, avg=9045.37, stdev=2627.77 00:19:34.656 lat (msec): min=4254, max=12304, avg=9283.05, stdev=2438.56 00:19:34.656 clat percentiles (msec): 00:19:34.656 | 1.00th=[ 2089], 5.00th=[ 6342], 10.00th=[ 6342], 20.00th=[ 6409], 00:19:34.656 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8490], 60.00th=[10671], 00:19:34.656 | 70.00th=[10671], 80.00th=[12147], 90.00th=[12147], 95.00th=[12281], 00:19:34.656 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.656 | 99.99th=[12281] 00:19:34.656 lat (msec) : >=2000=100.00% 00:19:34.656 cpu : usr=0.00%, sys=0.21%, ctx=61, majf=0, minf=11009 00:19:34.656 IO depths : 1=2.3%, 2=4.7%, 4=9.3%, 8=18.6%, 16=37.2%, 32=27.9%, >=64=0.0% 00:19:34.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.656 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.656 issued rwts: total=43,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.656 job3: (groupid=0, jobs=1): err= 0: pid=2792879: Wed Nov 20 12:33:38 2024 00:19:34.656 read: IOPS=7, BW=8066KiB/s (8260kB/s)(97.0MiB/12314msec) 00:19:34.656 slat (usec): min=445, max=2106.0k, avg=105275.47, stdev=428040.36 00:19:34.656 clat (msec): min=2101, max=12309, avg=8677.73, stdev=2997.58 00:19:34.656 lat (msec): min=4134, max=12313, avg=8783.00, stdev=2943.03 00:19:34.656 clat percentiles (msec): 00:19:34.656 | 1.00th=[ 2106], 5.00th=[ 4279], 10.00th=[ 4329], 20.00th=[ 6409], 00:19:34.656 | 30.00th=[ 6409], 40.00th=[ 8423], 50.00th=[ 8423], 60.00th=[10671], 00:19:34.656 | 70.00th=[10671], 80.00th=[12147], 90.00th=[12281], 95.00th=[12281], 00:19:34.656 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.656 | 99.99th=[12281] 00:19:34.656 lat (msec) : >=2000=100.00% 00:19:34.656 cpu : usr=0.00%, sys=0.39%, ctx=37, majf=0, minf=24833 00:19:34.656 IO depths : 1=1.0%, 2=2.1%, 4=4.1%, 8=8.2%, 16=16.5%, 32=33.0%, >=64=35.1% 00:19:34.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.656 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.656 issued rwts: total=97,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.656 job3: (groupid=0, jobs=1): err= 0: pid=2792880: Wed Nov 20 12:33:38 2024 00:19:34.656 read: IOPS=12, BW=12.4MiB/s (13.0MB/s)(153MiB/12347msec) 00:19:34.656 slat (usec): min=99, max=2058.3k, avg=66864.62, stdev=333184.85 00:19:34.656 clat (msec): min=2114, max=12248, avg=8366.62, stdev=2198.12 00:19:34.656 lat (msec): min=4055, max=12253, avg=8433.49, stdev=2160.68 00:19:34.656 clat percentiles (msec): 00:19:34.656 | 1.00th=[ 4044], 5.00th=[ 4144], 10.00th=[ 5873], 20.00th=[ 6409], 00:19:34.656 | 30.00th=[ 8288], 40.00th=[ 8288], 50.00th=[ 8356], 60.00th=[ 8423], 00:19:34.656 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[10671], 95.00th=[12147], 00:19:34.656 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.656 | 99.99th=[12281] 00:19:34.656 bw ( KiB/s): min= 1903, max=24576, per=0.91%, avg=13275.75, stdev=9293.99, samples=4 00:19:34.656 iops : min= 1, max= 24, avg=12.75, stdev= 9.43, samples=4 00:19:34.656 lat (msec) : >=2000=100.00% 00:19:34.656 cpu : usr=0.00%, sys=0.54%, ctx=97, majf=0, minf=32769 00:19:34.656 IO depths 
: 1=0.7%, 2=1.3%, 4=2.6%, 8=5.2%, 16=10.5%, 32=20.9%, >=64=58.8% 00:19:34.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.656 complete : 0=0.0%, 4=96.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.7% 00:19:34.656 issued rwts: total=153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.656 job3: (groupid=0, jobs=1): err= 0: pid=2792881: Wed Nov 20 12:33:38 2024 00:19:34.656 read: IOPS=5, BW=5741KiB/s (5878kB/s)(69.0MiB/12308msec) 00:19:34.656 slat (usec): min=435, max=2159.4k, avg=147866.81, stdev=511144.91 00:19:34.656 clat (msec): min=2104, max=12286, avg=8604.28, stdev=3271.00 00:19:34.656 lat (msec): min=4263, max=12307, avg=8752.15, stdev=3202.74 00:19:34.656 clat percentiles (msec): 00:19:34.656 | 1.00th=[ 2106], 5.00th=[ 4279], 10.00th=[ 4279], 20.00th=[ 4279], 00:19:34.656 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 8557], 60.00th=[10671], 00:19:34.656 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12281], 95.00th=[12281], 00:19:34.656 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.656 | 99.99th=[12281] 00:19:34.656 lat (msec) : >=2000=100.00% 00:19:34.656 cpu : usr=0.01%, sys=0.33%, ctx=55, majf=0, minf=17665 00:19:34.656 IO depths : 1=1.4%, 2=2.9%, 4=5.8%, 8=11.6%, 16=23.2%, 32=46.4%, >=64=8.7% 00:19:34.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.656 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.656 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.656 job3: (groupid=0, jobs=1): err= 0: pid=2792882: Wed Nov 20 12:33:38 2024 00:19:34.656 read: IOPS=7, BW=7797KiB/s (7984kB/s)(94.0MiB/12346msec) 00:19:34.656 slat (usec): min=428, max=2146.0k, avg=108733.74, stdev=438139.73 00:19:34.656 clat (msec): min=2123, max=12344, avg=9091.66, stdev=3071.21 00:19:34.656 lat (msec): min=4136, max=12344, avg=9200.40, stdev=3002.02 00:19:34.656 clat percentiles (msec): 00:19:34.656 | 1.00th=[ 2123], 5.00th=[ 4144], 10.00th=[ 4178], 20.00th=[ 6275], 00:19:34.656 | 30.00th=[ 6275], 40.00th=[ 8557], 50.00th=[ 8557], 60.00th=[10671], 00:19:34.656 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:19:34.656 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.656 | 99.99th=[12281] 00:19:34.656 lat (msec) : >=2000=100.00% 00:19:34.656 cpu : usr=0.01%, sys=0.44%, ctx=86, majf=0, minf=24065 00:19:34.656 IO depths : 1=1.1%, 2=2.1%, 4=4.3%, 8=8.5%, 16=17.0%, 32=34.0%, >=64=33.0% 00:19:34.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.656 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.656 issued rwts: total=94,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.656 job3: (groupid=0, jobs=1): err= 0: pid=2792883: Wed Nov 20 12:33:38 2024 00:19:34.656 read: IOPS=9, BW=9755KiB/s (9989kB/s)(118MiB/12387msec) 00:19:34.656 slat (usec): min=453, max=4201.1k, avg=86809.59, stdev=468351.28 00:19:34.656 clat (msec): min=2142, max=12385, avg=8136.83, stdev=3379.83 00:19:34.656 lat (msec): min=3993, max=12386, avg=8223.64, stdev=3356.02 00:19:34.656 clat percentiles (msec): 00:19:34.656 | 1.00th=[ 4010], 5.00th=[ 4010], 10.00th=[ 4144], 20.00th=[ 4212], 00:19:34.656 | 30.00th=[ 6141], 40.00th=[ 6208], 50.00th=[ 6275], 60.00th=[10671], 
00:19:34.656 | 70.00th=[12147], 80.00th=[12281], 90.00th=[12416], 95.00th=[12416], 00:19:34.656 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:19:34.656 | 99.99th=[12416] 00:19:34.656 lat (msec) : >=2000=100.00% 00:19:34.656 cpu : usr=0.00%, sys=0.60%, ctx=95, majf=0, minf=30209 00:19:34.656 IO depths : 1=0.8%, 2=1.7%, 4=3.4%, 8=6.8%, 16=13.6%, 32=27.1%, >=64=46.6% 00:19:34.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.656 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.656 issued rwts: total=118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.656 job3: (groupid=0, jobs=1): err= 0: pid=2792884: Wed Nov 20 12:33:38 2024 00:19:34.656 read: IOPS=2, BW=2916KiB/s (2986kB/s)(35.0MiB/12291msec) 00:19:34.656 slat (usec): min=457, max=2086.9k, avg=291030.25, stdev=688017.21 00:19:34.656 clat (msec): min=2104, max=12198, avg=7600.74, stdev=2997.25 00:19:34.656 lat (msec): min=4163, max=12290, avg=7891.77, stdev=2941.85 00:19:34.656 clat percentiles (msec): 00:19:34.656 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 4212], 00:19:34.656 | 30.00th=[ 4245], 40.00th=[ 6342], 50.00th=[ 8490], 60.00th=[ 8557], 00:19:34.656 | 70.00th=[10537], 80.00th=[10671], 90.00th=[10671], 95.00th=[12147], 00:19:34.656 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:34.656 | 99.99th=[12147] 00:19:34.656 lat (msec) : >=2000=100.00% 00:19:34.656 cpu : usr=0.00%, sys=0.20%, ctx=66, majf=0, minf=8961 00:19:34.656 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:19:34.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.656 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.656 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.656 job3: (groupid=0, jobs=1): err= 0: pid=2792885: Wed Nov 20 12:33:38 2024 00:19:34.656 read: IOPS=3, BW=3919KiB/s (4013kB/s)(47.0MiB/12280msec) 00:19:34.656 slat (usec): min=416, max=2055.2k, avg=216649.35, stdev=601931.16 00:19:34.656 clat (msec): min=2096, max=12221, avg=6821.86, stdev=2999.03 00:19:34.656 lat (msec): min=4151, max=12279, avg=7038.50, stdev=3017.97 00:19:34.656 clat percentiles (msec): 00:19:34.656 | 1.00th=[ 2089], 5.00th=[ 4144], 10.00th=[ 4178], 20.00th=[ 4212], 00:19:34.656 | 30.00th=[ 4245], 40.00th=[ 4245], 50.00th=[ 6342], 60.00th=[ 6409], 00:19:34.656 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[10671], 95.00th=[12281], 00:19:34.656 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.656 | 99.99th=[12281] 00:19:34.656 lat (msec) : >=2000=100.00% 00:19:34.656 cpu : usr=0.00%, sys=0.28%, ctx=56, majf=0, minf=12033 00:19:34.657 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0% 00:19:34.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.657 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.657 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.657 job4: (groupid=0, jobs=1): err= 0: pid=2792886: Wed Nov 20 12:33:38 2024 00:19:34.657 read: IOPS=2, BW=2756KiB/s (2822kB/s)(33.0MiB/12262msec) 00:19:34.657 slat (usec): min=459, max=2202.7k, avg=307342.98, stdev=711685.31 00:19:34.657 clat 
(msec): min=2118, max=12199, avg=8261.36, stdev=2912.22 00:19:34.657 lat (msec): min=4120, max=12261, avg=8568.71, stdev=2775.71 00:19:34.657 clat percentiles (msec): 00:19:34.657 | 1.00th=[ 2123], 5.00th=[ 4111], 10.00th=[ 4111], 20.00th=[ 6342], 00:19:34.657 | 30.00th=[ 6409], 40.00th=[ 6409], 50.00th=[ 8423], 60.00th=[ 8490], 00:19:34.657 | 70.00th=[10537], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:19:34.657 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:34.657 | 99.99th=[12147] 00:19:34.657 lat (msec) : >=2000=100.00% 00:19:34.657 cpu : usr=0.00%, sys=0.16%, ctx=45, majf=0, minf=8449 00:19:34.657 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0% 00:19:34.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.657 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.657 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.657 job4: (groupid=0, jobs=1): err= 0: pid=2792887: Wed Nov 20 12:33:38 2024 00:19:34.657 read: IOPS=21, BW=21.4MiB/s (22.4MB/s)(264MiB/12362msec) 00:19:34.657 slat (usec): min=65, max=6408.6k, avg=38808.33, stdev=416535.43 00:19:34.657 clat (msec): min=246, max=12317, avg=5844.44, stdev=5357.95 00:19:34.657 lat (msec): min=248, max=12319, avg=5883.25, stdev=5364.92 00:19:34.657 clat percentiles (msec): 00:19:34.657 | 1.00th=[ 249], 5.00th=[ 262], 10.00th=[ 266], 20.00th=[ 296], 00:19:34.657 | 30.00th=[ 439], 40.00th=[ 456], 50.00th=[ 4245], 60.00th=[11610], 00:19:34.657 | 70.00th=[11745], 80.00th=[11745], 90.00th=[11879], 95.00th=[12281], 00:19:34.657 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.657 | 99.99th=[12281] 00:19:34.657 bw ( KiB/s): min= 1831, max=169984, per=3.86%, avg=56071.80, stdev=69139.37, samples=5 00:19:34.657 iops : min= 1, max= 166, avg=54.60, stdev=67.67, samples=5 00:19:34.657 lat (msec) : 250=1.14%, 500=42.80%, 2000=0.76%, >=2000=55.30% 00:19:34.657 cpu : usr=0.01%, sys=0.68%, ctx=202, majf=0, minf=32769 00:19:34.657 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.0%, 16=6.1%, 32=12.1%, >=64=76.1% 00:19:34.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.657 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:19:34.657 issued rwts: total=264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.657 job4: (groupid=0, jobs=1): err= 0: pid=2792888: Wed Nov 20 12:33:38 2024 00:19:34.657 read: IOPS=10, BW=10.2MiB/s (10.7MB/s)(106MiB/10358msec) 00:19:34.657 slat (usec): min=428, max=2158.2k, avg=96529.07, stdev=416706.54 00:19:34.657 clat (msec): min=124, max=10356, avg=6567.08, stdev=3360.17 00:19:34.657 lat (msec): min=2118, max=10357, avg=6663.61, stdev=3320.08 00:19:34.657 clat percentiles (msec): 00:19:34.657 | 1.00th=[ 2123], 5.00th=[ 2140], 10.00th=[ 2232], 20.00th=[ 2232], 00:19:34.657 | 30.00th=[ 4463], 40.00th=[ 4463], 50.00th=[ 6544], 60.00th=[ 8658], 00:19:34.657 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10402], 95.00th=[10402], 00:19:34.657 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:19:34.657 | 99.99th=[10402] 00:19:34.657 lat (msec) : 250=0.94%, >=2000=99.06% 00:19:34.657 cpu : usr=0.00%, sys=0.65%, ctx=73, majf=0, minf=27137 00:19:34.657 IO depths : 1=0.9%, 2=1.9%, 4=3.8%, 8=7.5%, 16=15.1%, 32=30.2%, >=64=40.6% 00:19:34.657 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.657 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.657 issued rwts: total=106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.657 job4: (groupid=0, jobs=1): err= 0: pid=2792889: Wed Nov 20 12:33:38 2024 00:19:34.657 read: IOPS=17, BW=17.9MiB/s (18.8MB/s)(186MiB/10368msec) 00:19:34.657 slat (usec): min=69, max=2088.3k, avg=55020.18, stdev=302806.15 00:19:34.657 clat (msec): min=132, max=10186, avg=6222.24, stdev=2422.35 00:19:34.657 lat (msec): min=2053, max=10187, avg=6277.26, stdev=2397.69 00:19:34.657 clat percentiles (msec): 00:19:34.657 | 1.00th=[ 2056], 5.00th=[ 2089], 10.00th=[ 2198], 20.00th=[ 3842], 00:19:34.657 | 30.00th=[ 4463], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8020], 00:19:34.657 | 70.00th=[ 8423], 80.00th=[ 8423], 90.00th=[ 8490], 95.00th=[ 8658], 00:19:34.657 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:19:34.657 | 99.99th=[10134] 00:19:34.657 bw ( KiB/s): min=24576, max=63488, per=2.73%, avg=39594.67, stdev=20919.03, samples=3 00:19:34.657 iops : min= 24, max= 62, avg=38.67, stdev=20.43, samples=3 00:19:34.657 lat (msec) : 250=0.54%, >=2000=99.46% 00:19:34.657 cpu : usr=0.00%, sys=0.69%, ctx=137, majf=0, minf=32769 00:19:34.657 IO depths : 1=0.5%, 2=1.1%, 4=2.2%, 8=4.3%, 16=8.6%, 32=17.2%, >=64=66.1% 00:19:34.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.657 complete : 0=0.0%, 4=98.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.7% 00:19:34.657 issued rwts: total=186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.657 job4: (groupid=0, jobs=1): err= 0: pid=2792890: Wed Nov 20 12:33:38 2024 00:19:34.657 read: IOPS=3, BW=3083KiB/s (3157kB/s)(37.0MiB/12289msec) 00:19:34.657 slat (usec): min=493, max=4172.8k, avg=275259.68, stdev=842953.96 00:19:34.657 clat (msec): min=2103, max=12281, avg=6988.39, stdev=3232.95 00:19:34.657 lat (msec): min=4112, max=12288, avg=7263.65, stdev=3239.09 00:19:34.657 clat percentiles (msec): 00:19:34.657 | 1.00th=[ 2106], 5.00th=[ 4111], 10.00th=[ 4245], 20.00th=[ 4279], 00:19:34.657 | 30.00th=[ 4279], 40.00th=[ 4279], 50.00th=[ 4279], 60.00th=[ 8490], 00:19:34.657 | 70.00th=[ 8490], 80.00th=[10671], 90.00th=[12281], 95.00th=[12281], 00:19:34.657 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.657 | 99.99th=[12281] 00:19:34.657 lat (msec) : >=2000=100.00% 00:19:34.657 cpu : usr=0.00%, sys=0.19%, ctx=40, majf=0, minf=9473 00:19:34.657 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0% 00:19:34.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.657 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.657 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.657 job4: (groupid=0, jobs=1): err= 0: pid=2792891: Wed Nov 20 12:33:38 2024 00:19:34.657 read: IOPS=6, BW=6175KiB/s (6323kB/s)(62.0MiB/10281msec) 00:19:34.657 slat (usec): min=429, max=4197.2k, avg=163795.59, stdev=650420.61 00:19:34.657 clat (msec): min=125, max=10274, avg=7844.34, stdev=1983.38 00:19:34.657 lat (msec): min=4322, max=10280, avg=8008.14, stdev=1739.84 00:19:34.657 clat percentiles (msec): 00:19:34.657 | 1.00th=[ 126], 5.00th=[ 4329], 10.00th=[ 4396], 20.00th=[ 6544], 
00:19:34.657 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[ 8557], 60.00th=[ 8658], 00:19:34.657 | 70.00th=[ 8658], 80.00th=[ 8658], 90.00th=[10134], 95.00th=[10268], 00:19:34.657 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:19:34.657 | 99.99th=[10268] 00:19:34.657 lat (msec) : 250=1.61%, >=2000=98.39% 00:19:34.657 cpu : usr=0.01%, sys=0.35%, ctx=45, majf=0, minf=15873 00:19:34.657 IO depths : 1=1.6%, 2=3.2%, 4=6.5%, 8=12.9%, 16=25.8%, 32=50.0%, >=64=0.0% 00:19:34.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.657 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.657 issued rwts: total=62,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.657 job4: (groupid=0, jobs=1): err= 0: pid=2792892: Wed Nov 20 12:33:38 2024 00:19:34.657 read: IOPS=3, BW=3781KiB/s (3872kB/s)(38.0MiB/10291msec) 00:19:34.657 slat (usec): min=436, max=2274.2k, avg=267572.39, stdev=681551.47 00:19:34.657 clat (msec): min=122, max=10287, avg=7843.32, stdev=2197.35 00:19:34.657 lat (msec): min=2129, max=10290, avg=8110.89, stdev=1818.12 00:19:34.657 clat percentiles (msec): 00:19:34.657 | 1.00th=[ 123], 5.00th=[ 2123], 10.00th=[ 6409], 20.00th=[ 6477], 00:19:34.657 | 30.00th=[ 6544], 40.00th=[ 8658], 50.00th=[ 8658], 60.00th=[ 8658], 00:19:34.657 | 70.00th=[ 8658], 80.00th=[ 8658], 90.00th=[10268], 95.00th=[10268], 00:19:34.657 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:19:34.657 | 99.99th=[10268] 00:19:34.657 lat (msec) : 250=2.63%, >=2000=97.37% 00:19:34.657 cpu : usr=0.00%, sys=0.20%, ctx=34, majf=0, minf=9729 00:19:34.657 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:19:34.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.657 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.657 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.657 job4: (groupid=0, jobs=1): err= 0: pid=2792893: Wed Nov 20 12:33:38 2024 00:19:34.657 read: IOPS=5, BW=5257KiB/s (5384kB/s)(53.0MiB/10323msec) 00:19:34.657 slat (usec): min=497, max=2169.8k, avg=191869.15, stdev=578163.29 00:19:34.657 clat (msec): min=152, max=10320, avg=5568.26, stdev=3206.46 00:19:34.657 lat (msec): min=2117, max=10322, avg=5760.13, stdev=3180.33 00:19:34.657 clat percentiles (msec): 00:19:34.657 | 1.00th=[ 153], 5.00th=[ 2123], 10.00th=[ 2123], 20.00th=[ 2123], 00:19:34.657 | 30.00th=[ 4329], 40.00th=[ 4396], 50.00th=[ 4396], 60.00th=[ 4396], 00:19:34.657 | 70.00th=[ 6544], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268], 00:19:34.657 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:19:34.657 | 99.99th=[10268] 00:19:34.657 lat (msec) : 250=1.89%, >=2000=98.11% 00:19:34.657 cpu : usr=0.00%, sys=0.34%, ctx=50, majf=0, minf=13569 00:19:34.658 IO depths : 1=1.9%, 2=3.8%, 4=7.5%, 8=15.1%, 16=30.2%, 32=41.5%, >=64=0.0% 00:19:34.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.658 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.658 issued rwts: total=53,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.658 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.658 job4: (groupid=0, jobs=1): err= 0: pid=2792894: Wed Nov 20 12:33:38 2024 00:19:34.658 read: IOPS=5, BW=5279KiB/s 
(5406kB/s)(53.0MiB/10280msec) 00:19:34.658 slat (usec): min=590, max=2119.7k, avg=191088.87, stdev=571115.22 00:19:34.658 clat (msec): min=151, max=10268, avg=5702.69, stdev=2636.10 00:19:34.658 lat (msec): min=2126, max=10279, avg=5893.78, stdev=2592.70 00:19:34.658 clat percentiles (msec): 00:19:34.658 | 1.00th=[ 153], 5.00th=[ 2123], 10.00th=[ 2123], 20.00th=[ 2265], 00:19:34.658 | 30.00th=[ 4396], 40.00th=[ 6409], 50.00th=[ 6544], 60.00th=[ 6544], 00:19:34.658 | 70.00th=[ 6544], 80.00th=[ 8658], 90.00th=[ 8658], 95.00th=[ 8658], 00:19:34.658 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:19:34.658 | 99.99th=[10268] 00:19:34.658 lat (msec) : 250=1.89%, >=2000=98.11% 00:19:34.658 cpu : usr=0.00%, sys=0.36%, ctx=31, majf=0, minf=13569 00:19:34.658 IO depths : 1=1.9%, 2=3.8%, 4=7.5%, 8=15.1%, 16=30.2%, 32=41.5%, >=64=0.0% 00:19:34.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.658 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.658 issued rwts: total=53,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.658 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.658 job4: (groupid=0, jobs=1): err= 0: pid=2792895: Wed Nov 20 12:33:38 2024 00:19:34.658 read: IOPS=190, BW=191MiB/s (200MB/s)(1908MiB/10014msec) 00:19:34.658 slat (usec): min=44, max=2082.8k, avg=5237.55, stdev=89066.61 00:19:34.658 clat (msec): min=12, max=8373, avg=246.48, stdev=942.31 00:19:34.658 lat (msec): min=13, max=8374, avg=251.72, stdev=960.50 00:19:34.658 clat percentiles (msec): 00:19:34.658 | 1.00th=[ 29], 5.00th=[ 96], 10.00th=[ 105], 20.00th=[ 113], 00:19:34.658 | 30.00th=[ 113], 40.00th=[ 114], 50.00th=[ 114], 60.00th=[ 114], 00:19:34.658 | 70.00th=[ 115], 80.00th=[ 117], 90.00th=[ 138], 95.00th=[ 169], 00:19:34.658 | 99.00th=[ 6812], 99.50th=[ 8356], 99.90th=[ 8356], 99.95th=[ 8356], 00:19:34.658 | 99.99th=[ 8356] 00:19:34.658 bw ( KiB/s): min=264192, max=1144832, per=62.79%, avg=911872.00, stdev=432383.12, samples=4 00:19:34.658 iops : min= 258, max= 1118, avg=890.50, stdev=422.25, samples=4 00:19:34.658 lat (msec) : 20=0.47%, 50=1.78%, 100=3.09%, 250=91.56%, 500=1.05% 00:19:34.658 lat (msec) : >=2000=2.04% 00:19:34.658 cpu : usr=0.07%, sys=1.79%, ctx=1709, majf=0, minf=32769 00:19:34.658 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:19:34.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.658 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.658 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.658 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.658 job4: (groupid=0, jobs=1): err= 0: pid=2792896: Wed Nov 20 12:33:38 2024 00:19:34.658 read: IOPS=24, BW=24.7MiB/s (25.9MB/s)(254MiB/10269msec) 00:19:34.658 slat (usec): min=64, max=2080.9k, avg=39985.49, stdev=260740.14 00:19:34.658 clat (msec): min=110, max=6676, avg=1432.21, stdev=1359.85 00:19:34.658 lat (msec): min=133, max=8240, avg=1472.20, stdev=1432.75 00:19:34.658 clat percentiles (msec): 00:19:34.658 | 1.00th=[ 133], 5.00th=[ 138], 10.00th=[ 138], 20.00th=[ 140], 00:19:34.658 | 30.00th=[ 140], 40.00th=[ 142], 50.00th=[ 1905], 60.00th=[ 1938], 00:19:34.658 | 70.00th=[ 1955], 80.00th=[ 1989], 90.00th=[ 2299], 95.00th=[ 2400], 00:19:34.658 | 99.00th=[ 6678], 99.50th=[ 6678], 99.90th=[ 6678], 99.95th=[ 6678], 00:19:34.658 | 99.99th=[ 6678] 00:19:34.658 bw ( KiB/s): min=258048, max=258048, per=17.77%, avg=258048.00, stdev= 
0.00, samples=1 00:19:34.658 iops : min= 252, max= 252, avg=252.00, stdev= 0.00, samples=1 00:19:34.658 lat (msec) : 250=40.16%, 500=0.39%, 2000=41.34%, >=2000=18.11% 00:19:34.658 cpu : usr=0.03%, sys=1.02%, ctx=184, majf=0, minf=32769 00:19:34.658 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.3%, 32=12.6%, >=64=75.2% 00:19:34.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.658 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:19:34.658 issued rwts: total=254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.658 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.658 job4: (groupid=0, jobs=1): err= 0: pid=2792897: Wed Nov 20 12:33:38 2024 00:19:34.658 read: IOPS=18, BW=18.2MiB/s (19.1MB/s)(188MiB/10328msec) 00:19:34.658 slat (usec): min=70, max=2149.1k, avg=54274.66, stdev=298868.22 00:19:34.658 clat (msec): min=122, max=8354, avg=4420.93, stdev=2402.28 00:19:34.658 lat (msec): min=1793, max=8356, avg=4475.20, stdev=2404.24 00:19:34.658 clat percentiles (msec): 00:19:34.658 | 1.00th=[ 1787], 5.00th=[ 1821], 10.00th=[ 1838], 20.00th=[ 1938], 00:19:34.658 | 30.00th=[ 2056], 40.00th=[ 2232], 50.00th=[ 4396], 60.00th=[ 4732], 00:19:34.658 | 70.00th=[ 6544], 80.00th=[ 6812], 90.00th=[ 8221], 95.00th=[ 8288], 00:19:34.658 | 99.00th=[ 8356], 99.50th=[ 8356], 99.90th=[ 8356], 99.95th=[ 8356], 00:19:34.658 | 99.99th=[ 8356] 00:19:34.658 bw ( KiB/s): min=12288, max=110592, per=4.23%, avg=61440.00, stdev=69511.43, samples=2 00:19:34.658 iops : min= 12, max= 108, avg=60.00, stdev=67.88, samples=2 00:19:34.658 lat (msec) : 250=0.53%, 2000=26.60%, >=2000=72.87% 00:19:34.658 cpu : usr=0.00%, sys=0.68%, ctx=133, majf=0, minf=32769 00:19:34.658 IO depths : 1=0.5%, 2=1.1%, 4=2.1%, 8=4.3%, 16=8.5%, 32=17.0%, >=64=66.5% 00:19:34.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.658 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.6% 00:19:34.658 issued rwts: total=188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.658 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.658 job4: (groupid=0, jobs=1): err= 0: pid=2792898: Wed Nov 20 12:33:38 2024 00:19:34.658 read: IOPS=4, BW=4333KiB/s (4437kB/s)(52.0MiB/12288msec) 00:19:34.658 slat (usec): min=441, max=2270.1k, avg=195695.63, stdev=595063.76 00:19:34.658 clat (msec): min=2111, max=12284, avg=9496.77, stdev=2933.63 00:19:34.658 lat (msec): min=4122, max=12287, avg=9692.46, stdev=2765.91 00:19:34.658 clat percentiles (msec): 00:19:34.658 | 1.00th=[ 2106], 5.00th=[ 4111], 10.00th=[ 6409], 20.00th=[ 6409], 00:19:34.658 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12147], 00:19:34.658 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12281], 95.00th=[12281], 00:19:34.658 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:34.658 | 99.99th=[12281] 00:19:34.658 lat (msec) : >=2000=100.00% 00:19:34.658 cpu : usr=0.00%, sys=0.26%, ctx=46, majf=0, minf=13313 00:19:34.658 IO depths : 1=1.9%, 2=3.8%, 4=7.7%, 8=15.4%, 16=30.8%, 32=40.4%, >=64=0.0% 00:19:34.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.658 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:34.658 issued rwts: total=52,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.658 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.658 job5: (groupid=0, jobs=1): err= 0: pid=2792899: Wed Nov 20 12:33:38 2024 00:19:34.658 read: IOPS=35, BW=35.2MiB/s 
(36.9MB/s)(362MiB/10275msec) 00:19:34.658 slat (usec): min=45, max=1980.3k, avg=27968.99, stdev=200009.27 00:19:34.658 clat (msec): min=147, max=8723, avg=2935.14, stdev=3374.12 00:19:34.658 lat (msec): min=189, max=8748, avg=2963.11, stdev=3377.83 00:19:34.658 clat percentiles (msec): 00:19:34.658 | 1.00th=[ 190], 5.00th=[ 211], 10.00th=[ 224], 20.00th=[ 257], 00:19:34.658 | 30.00th=[ 317], 40.00th=[ 414], 50.00th=[ 489], 60.00th=[ 1250], 00:19:34.658 | 70.00th=[ 7550], 80.00th=[ 7617], 90.00th=[ 7752], 95.00th=[ 7752], 00:19:34.658 | 99.00th=[ 7752], 99.50th=[ 8557], 99.90th=[ 8658], 99.95th=[ 8658], 00:19:34.658 | 99.99th=[ 8658] 00:19:34.658 bw ( KiB/s): min= 6144, max=241664, per=5.50%, avg=79869.17, stdev=112384.94, samples=6 00:19:34.658 iops : min= 6, max= 236, avg=77.83, stdev=109.88, samples=6 00:19:34.658 lat (msec) : 250=16.30%, 500=34.81%, 750=8.29%, 1000=0.28%, 2000=1.10% 00:19:34.658 lat (msec) : >=2000=39.23% 00:19:34.658 cpu : usr=0.03%, sys=1.09%, ctx=259, majf=0, minf=32769 00:19:34.658 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.8%, >=64=82.6% 00:19:34.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.658 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:19:34.658 issued rwts: total=362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.658 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.658 job5: (groupid=0, jobs=1): err= 0: pid=2792900: Wed Nov 20 12:33:38 2024 00:19:34.658 read: IOPS=8, BW=8752KiB/s (8962kB/s)(88.0MiB/10296msec) 00:19:34.658 slat (usec): min=450, max=1979.6k, avg=115228.39, stdev=439202.79 00:19:34.658 clat (msec): min=154, max=10291, avg=6045.04, stdev=2924.07 00:19:34.658 lat (msec): min=2133, max=10295, avg=6160.26, stdev=2888.87 00:19:34.658 clat percentiles (msec): 00:19:34.658 | 1.00th=[ 155], 5.00th=[ 2140], 10.00th=[ 2299], 20.00th=[ 2299], 00:19:34.658 | 30.00th=[ 4463], 40.00th=[ 4463], 50.00th=[ 6409], 60.00th=[ 6544], 00:19:34.658 | 70.00th=[ 8658], 80.00th=[ 8792], 90.00th=[10268], 95.00th=[10268], 00:19:34.658 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:19:34.658 | 99.99th=[10268] 00:19:34.658 lat (msec) : 250=1.14%, >=2000=98.86% 00:19:34.658 cpu : usr=0.00%, sys=0.51%, ctx=53, majf=0, minf=22529 00:19:34.658 IO depths : 1=1.1%, 2=2.3%, 4=4.5%, 8=9.1%, 16=18.2%, 32=36.4%, >=64=28.4% 00:19:34.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.658 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.658 issued rwts: total=88,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.658 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.658 job5: (groupid=0, jobs=1): err= 0: pid=2792901: Wed Nov 20 12:33:38 2024 00:19:34.658 read: IOPS=13, BW=13.6MiB/s (14.2MB/s)(140MiB/10321msec) 00:19:34.658 slat (usec): min=126, max=2143.5k, avg=72646.35, stdev=360985.44 00:19:34.658 clat (msec): min=149, max=8718, avg=3841.61, stdev=2607.28 00:19:34.658 lat (msec): min=1975, max=10292, avg=3914.26, stdev=2644.80 00:19:34.658 clat percentiles (msec): 00:19:34.658 | 1.00th=[ 1972], 5.00th=[ 1989], 10.00th=[ 2005], 20.00th=[ 2039], 00:19:34.658 | 30.00th=[ 2072], 40.00th=[ 2106], 50.00th=[ 2198], 60.00th=[ 2265], 00:19:34.658 | 70.00th=[ 4463], 80.00th=[ 6745], 90.00th=[ 8658], 95.00th=[ 8658], 00:19:34.658 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:19:34.658 | 99.99th=[ 8658] 00:19:34.658 bw ( KiB/s): min=24576, max=24576, per=1.69%, 
avg=24576.00, stdev= 0.00, samples=1 00:19:34.659 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=1 00:19:34.659 lat (msec) : 250=0.71%, 2000=7.86%, >=2000=91.43% 00:19:34.659 cpu : usr=0.00%, sys=0.91%, ctx=90, majf=0, minf=32769 00:19:34.659 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=5.7%, 16=11.4%, 32=22.9%, >=64=55.0% 00:19:34.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.659 complete : 0=0.0%, 4=92.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=7.1% 00:19:34.659 issued rwts: total=140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.659 job5: (groupid=0, jobs=1): err= 0: pid=2792902: Wed Nov 20 12:33:38 2024 00:19:34.659 read: IOPS=8, BW=9119KiB/s (9338kB/s)(92.0MiB/10331msec) 00:19:34.659 slat (usec): min=443, max=2098.8k, avg=110577.37, stdev=438269.00 00:19:34.659 clat (msec): min=157, max=10328, avg=6102.10, stdev=3069.22 00:19:34.659 lat (msec): min=2123, max=10330, avg=6212.68, stdev=3035.77 00:19:34.659 clat percentiles (msec): 00:19:34.659 | 1.00th=[ 157], 5.00th=[ 2123], 10.00th=[ 2299], 20.00th=[ 2299], 00:19:34.659 | 30.00th=[ 4329], 40.00th=[ 4463], 50.00th=[ 6544], 60.00th=[ 6544], 00:19:34.659 | 70.00th=[ 8658], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268], 00:19:34.659 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:19:34.659 | 99.99th=[10268] 00:19:34.659 lat (msec) : 250=1.09%, >=2000=98.91% 00:19:34.659 cpu : usr=0.00%, sys=0.58%, ctx=63, majf=0, minf=23553 00:19:34.659 IO depths : 1=1.1%, 2=2.2%, 4=4.3%, 8=8.7%, 16=17.4%, 32=34.8%, >=64=31.5% 00:19:34.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.659 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.659 issued rwts: total=92,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.659 job5: (groupid=0, jobs=1): err= 0: pid=2792903: Wed Nov 20 12:33:38 2024 00:19:34.659 read: IOPS=19, BW=19.9MiB/s (20.9MB/s)(206MiB/10359msec) 00:19:34.659 slat (usec): min=73, max=2314.1k, avg=49541.83, stdev=286522.82 00:19:34.659 clat (msec): min=152, max=8593, avg=3523.41, stdev=2758.57 00:19:34.659 lat (msec): min=442, max=8611, avg=3572.95, stdev=2765.06 00:19:34.659 clat percentiles (msec): 00:19:34.659 | 1.00th=[ 443], 5.00th=[ 1620], 10.00th=[ 1636], 20.00th=[ 1687], 00:19:34.659 | 30.00th=[ 1737], 40.00th=[ 1787], 50.00th=[ 1854], 60.00th=[ 1955], 00:19:34.659 | 70.00th=[ 4866], 80.00th=[ 6946], 90.00th=[ 8557], 95.00th=[ 8557], 00:19:34.659 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:19:34.659 | 99.99th=[ 8658] 00:19:34.659 bw ( KiB/s): min=159744, max=159744, per=11.00%, avg=159744.00, stdev= 0.00, samples=1 00:19:34.659 iops : min= 156, max= 156, avg=156.00, stdev= 0.00, samples=1 00:19:34.659 lat (msec) : 250=0.49%, 500=2.91%, 750=0.49%, 2000=61.65%, >=2000=34.47% 00:19:34.659 cpu : usr=0.03%, sys=0.81%, ctx=191, majf=0, minf=32769 00:19:34.659 IO depths : 1=0.5%, 2=1.0%, 4=1.9%, 8=3.9%, 16=7.8%, 32=15.5%, >=64=69.4% 00:19:34.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.659 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:19:34.659 issued rwts: total=206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.659 job5: (groupid=0, jobs=1): err= 0: pid=2792904: Wed Nov 20 12:33:38 2024 00:19:34.659 
read: IOPS=10, BW=10.2MiB/s (10.7MB/s)(126MiB/12357msec) 00:19:34.659 slat (usec): min=441, max=2004.1k, avg=80997.50, stdev=365910.44 00:19:34.659 clat (msec): min=2150, max=12355, avg=9289.67, stdev=3058.98 00:19:34.659 lat (msec): min=4127, max=12356, avg=9370.67, stdev=3003.03 00:19:34.659 clat percentiles (msec): 00:19:34.659 | 1.00th=[ 4144], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 6409], 00:19:34.659 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10537], 60.00th=[10805], 00:19:34.659 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12416], 00:19:34.659 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:19:34.659 | 99.99th=[12416] 00:19:34.659 lat (msec) : >=2000=100.00% 00:19:34.659 cpu : usr=0.01%, sys=0.51%, ctx=100, majf=0, minf=32257 00:19:34.659 IO depths : 1=0.8%, 2=1.6%, 4=3.2%, 8=6.3%, 16=12.7%, 32=25.4%, >=64=50.0% 00:19:34.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.659 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.659 issued rwts: total=126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.659 job5: (groupid=0, jobs=1): err= 0: pid=2792905: Wed Nov 20 12:33:38 2024 00:19:34.659 read: IOPS=2, BW=2682KiB/s (2747kB/s)(27.0MiB/10308msec) 00:19:34.659 slat (usec): min=663, max=2170.0k, avg=376246.43, stdev=762019.57 00:19:34.659 clat (msec): min=148, max=10268, avg=6382.57, stdev=3006.05 00:19:34.659 lat (msec): min=2133, max=10307, avg=6758.82, stdev=2826.17 00:19:34.659 clat percentiles (msec): 00:19:34.659 | 1.00th=[ 148], 5.00th=[ 2140], 10.00th=[ 2140], 20.00th=[ 2265], 00:19:34.659 | 30.00th=[ 4463], 40.00th=[ 6477], 50.00th=[ 6544], 60.00th=[ 8557], 00:19:34.659 | 70.00th=[ 8658], 80.00th=[ 8792], 90.00th=[10268], 95.00th=[10268], 00:19:34.659 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:19:34.659 | 99.99th=[10268] 00:19:34.659 lat (msec) : 250=3.70%, >=2000=96.30% 00:19:34.659 cpu : usr=0.00%, sys=0.17%, ctx=46, majf=0, minf=6913 00:19:34.659 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:19:34.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.659 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:34.659 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.659 job5: (groupid=0, jobs=1): err= 0: pid=2792906: Wed Nov 20 12:33:38 2024 00:19:34.659 read: IOPS=13, BW=13.6MiB/s (14.3MB/s)(141MiB/10371msec) 00:19:34.659 slat (usec): min=88, max=1977.9k, avg=72453.94, stdev=340918.67 00:19:34.659 clat (msec): min=154, max=10357, avg=6950.06, stdev=3024.70 00:19:34.659 lat (msec): min=1994, max=10358, avg=7022.51, stdev=2982.76 00:19:34.659 clat percentiles (msec): 00:19:34.659 | 1.00th=[ 1989], 5.00th=[ 2123], 10.00th=[ 2265], 20.00th=[ 4463], 00:19:34.659 | 30.00th=[ 4463], 40.00th=[ 6544], 50.00th=[ 8221], 60.00th=[ 8658], 00:19:34.659 | 70.00th=[ 8658], 80.00th=[10268], 90.00th=[10402], 95.00th=[10402], 00:19:34.659 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:19:34.659 | 99.99th=[10402] 00:19:34.659 bw ( KiB/s): min=26624, max=26624, per=1.83%, avg=26624.00, stdev= 0.00, samples=1 00:19:34.659 iops : min= 26, max= 26, avg=26.00, stdev= 0.00, samples=1 00:19:34.659 lat (msec) : 250=0.71%, 2000=3.55%, >=2000=95.74% 00:19:34.659 cpu : usr=0.01%, sys=0.63%, 
ctx=111, majf=0, minf=32769 00:19:34.659 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=5.7%, 16=11.3%, 32=22.7%, >=64=55.3% 00:19:34.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.659 complete : 0=0.0%, 4=93.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=6.7% 00:19:34.659 issued rwts: total=141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.659 job5: (groupid=0, jobs=1): err= 0: pid=2792907: Wed Nov 20 12:33:38 2024 00:19:34.659 read: IOPS=8, BW=8553KiB/s (8759kB/s)(86.0MiB/10296msec) 00:19:34.659 slat (usec): min=433, max=2104.4k, avg=117841.29, stdev=445411.01 00:19:34.659 clat (msec): min=160, max=10294, avg=6685.25, stdev=2911.96 00:19:34.659 lat (msec): min=2141, max=10295, avg=6803.09, stdev=2849.19 00:19:34.659 clat percentiles (msec): 00:19:34.659 | 1.00th=[ 161], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 2299], 00:19:34.659 | 30.00th=[ 6409], 40.00th=[ 6544], 50.00th=[ 6611], 60.00th=[ 8557], 00:19:34.659 | 70.00th=[ 8792], 80.00th=[ 8792], 90.00th=[10268], 95.00th=[10268], 00:19:34.659 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:19:34.659 | 99.99th=[10268] 00:19:34.659 lat (msec) : 250=1.16%, >=2000=98.84% 00:19:34.659 cpu : usr=0.00%, sys=0.49%, ctx=54, majf=0, minf=22017 00:19:34.659 IO depths : 1=1.2%, 2=2.3%, 4=4.7%, 8=9.3%, 16=18.6%, 32=37.2%, >=64=26.7% 00:19:34.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.659 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.659 issued rwts: total=86,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.659 job5: (groupid=0, jobs=1): err= 0: pid=2792908: Wed Nov 20 12:33:38 2024 00:19:34.659 read: IOPS=103, BW=104MiB/s (109MB/s)(1065MiB/10270msec) 00:19:34.659 slat (usec): min=43, max=2004.5k, avg=9492.21, stdev=116324.67 00:19:34.659 clat (msec): min=86, max=5385, avg=812.31, stdev=1600.64 00:19:34.659 lat (msec): min=86, max=5385, avg=821.80, stdev=1609.11 00:19:34.659 clat percentiles (msec): 00:19:34.659 | 1.00th=[ 92], 5.00th=[ 102], 10.00th=[ 104], 20.00th=[ 120], 00:19:34.659 | 30.00th=[ 124], 40.00th=[ 125], 50.00th=[ 125], 60.00th=[ 157], 00:19:34.659 | 70.00th=[ 197], 80.00th=[ 236], 90.00th=[ 3977], 95.00th=[ 5336], 00:19:34.659 | 99.00th=[ 5403], 99.50th=[ 5403], 99.90th=[ 5403], 99.95th=[ 5403], 00:19:34.659 | 99.99th=[ 5403] 00:19:34.659 bw ( KiB/s): min=16384, max=976896, per=22.03%, avg=319838.83, stdev=455023.86, samples=6 00:19:34.659 iops : min= 16, max= 954, avg=312.33, stdev=444.37, samples=6 00:19:34.659 lat (msec) : 100=4.41%, 250=77.46%, 500=0.94%, 1000=1.50%, 2000=0.28% 00:19:34.659 lat (msec) : >=2000=15.40% 00:19:34.659 cpu : usr=0.03%, sys=1.55%, ctx=828, majf=0, minf=32769 00:19:34.659 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.1% 00:19:34.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.659 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.659 issued rwts: total=1065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.659 job5: (groupid=0, jobs=1): err= 0: pid=2792909: Wed Nov 20 12:33:38 2024 00:19:34.659 read: IOPS=8, BW=8464KiB/s (8667kB/s)(85.0MiB/10284msec) 00:19:34.659 slat (usec): min=429, max=4289.4k, avg=119166.48, stdev=565326.28 00:19:34.659 clat (msec): min=153, max=10282, 
avg=7020.55, stdev=2063.66 00:19:34.659 lat (msec): min=4443, max=10283, avg=7139.72, stdev=1951.84 00:19:34.659 clat percentiles (msec): 00:19:34.659 | 1.00th=[ 155], 5.00th=[ 4463], 10.00th=[ 4463], 20.00th=[ 4463], 00:19:34.659 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 6409], 60.00th=[ 6477], 00:19:34.659 | 70.00th=[ 8658], 80.00th=[ 8658], 90.00th=[10268], 95.00th=[10268], 00:19:34.659 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:19:34.659 | 99.99th=[10268] 00:19:34.659 lat (msec) : 250=1.18%, >=2000=98.82% 00:19:34.660 cpu : usr=0.00%, sys=0.46%, ctx=64, majf=0, minf=21761 00:19:34.660 IO depths : 1=1.2%, 2=2.4%, 4=4.7%, 8=9.4%, 16=18.8%, 32=37.6%, >=64=25.9% 00:19:34.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.660 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.660 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.660 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.660 job5: (groupid=0, jobs=1): err= 0: pid=2792910: Wed Nov 20 12:33:38 2024 00:19:34.660 read: IOPS=10, BW=10.5MiB/s (11.0MB/s)(108MiB/10320msec) 00:19:34.660 slat (usec): min=448, max=1978.5k, avg=94154.05, stdev=387516.34 00:19:34.660 clat (msec): min=150, max=10317, avg=5466.54, stdev=2598.91 00:19:34.660 lat (msec): min=2128, max=10319, avg=5560.69, stdev=2588.69 00:19:34.660 clat percentiles (msec): 00:19:34.660 | 1.00th=[ 2123], 5.00th=[ 2140], 10.00th=[ 2198], 20.00th=[ 2265], 00:19:34.660 | 30.00th=[ 4245], 40.00th=[ 4279], 50.00th=[ 4279], 60.00th=[ 6342], 00:19:34.660 | 70.00th=[ 6544], 80.00th=[ 8557], 90.00th=[10134], 95.00th=[10268], 00:19:34.660 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:19:34.660 | 99.99th=[10268] 00:19:34.660 lat (msec) : 250=0.93%, >=2000=99.07% 00:19:34.660 cpu : usr=0.00%, sys=0.71%, ctx=83, majf=0, minf=27649 00:19:34.660 IO depths : 1=0.9%, 2=1.9%, 4=3.7%, 8=7.4%, 16=14.8%, 32=29.6%, >=64=41.7% 00:19:34.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.660 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:34.660 issued rwts: total=108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.660 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.660 job5: (groupid=0, jobs=1): err= 0: pid=2792911: Wed Nov 20 12:33:38 2024 00:19:34.660 read: IOPS=114, BW=114MiB/s (120MB/s)(1178MiB/10311msec) 00:19:34.660 slat (usec): min=44, max=2106.0k, avg=8611.32, stdev=115085.91 00:19:34.660 clat (msec): min=106, max=5283, avg=698.38, stdev=1502.04 00:19:34.660 lat (msec): min=106, max=5284, avg=706.99, stdev=1510.83 00:19:34.660 clat percentiles (msec): 00:19:34.660 | 1.00th=[ 113], 5.00th=[ 115], 10.00th=[ 116], 20.00th=[ 123], 00:19:34.660 | 30.00th=[ 124], 40.00th=[ 125], 50.00th=[ 126], 60.00th=[ 126], 00:19:34.660 | 70.00th=[ 128], 80.00th=[ 167], 90.00th=[ 2970], 95.00th=[ 5201], 00:19:34.660 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:19:34.660 | 99.99th=[ 5269] 00:19:34.660 bw ( KiB/s): min= 4096, max=1044480, per=24.68%, avg=358400.00, stdev=452032.15, samples=6 00:19:34.660 iops : min= 4, max= 1020, avg=350.00, stdev=441.44, samples=6 00:19:34.660 lat (msec) : 250=83.28%, 500=2.63%, 1000=1.10%, >=2000=12.99% 00:19:34.660 cpu : usr=0.06%, sys=1.66%, ctx=972, majf=0, minf=32769 00:19:34.660 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.7% 00:19:34.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.660 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.660 issued rwts: total=1178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.660 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.660 00:19:34.660 Run status group 0 (all jobs): 00:19:34.660 READ: bw=1418MiB/s (1487MB/s), 1242KiB/s-224MiB/s (1272kB/s-235MB/s), io=17.3GiB (18.5GB), run=10014-12463msec 00:19:34.660 00:19:34.660 Disk stats (read/write): 00:19:34.660 nvme0n1: ios=31198/0, merge=0/0, ticks=5305694/0, in_queue=5305694, util=98.80% 00:19:34.660 nvme1n1: ios=31396/0, merge=0/0, ticks=9536687/0, in_queue=9536687, util=99.02% 00:19:34.660 nvme2n1: ios=10489/0, merge=0/0, ticks=9268334/0, in_queue=9268334, util=99.09% 00:19:34.660 nvme3n1: ios=10788/0, merge=0/0, ticks=10890503/0, in_queue=10890503, util=99.05% 00:19:34.660 nvme4n1: ios=25828/0, merge=0/0, ticks=9475001/0, in_queue=9475001, util=99.23% 00:19:34.660 nvme5n1: ios=29628/0, merge=0/0, ticks=9860891/0, in_queue=9860891, util=99.35% 00:19:34.660 12:33:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:19:34.660 12:33:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:19:34.660 12:33:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:34.660 12:33:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:19:34.660 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.660 12:33:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:19:34.660 12:33:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:19:34.660 12:33:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:34.660 12:33:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000000 00:19:34.660 12:33:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:34.660 12:33:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000000 00:19:34.660 12:33:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:19:34.660 12:33:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:34.660 12:33:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.660 12:33:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:34.660 12:33:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.660 12:33:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:34.660 12:33:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:34.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:34.918 12:33:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 
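The disconnect sequence traced above (and repeated below for cnode2 through cnode5) tears down one subsystem per iteration. Reconstructed from the srq_overwhelm.sh xtrace records, the loop is roughly the following minimal sketch; waitforserial_disconnect and rpc_cmd are SPDK helpers from common/autotest_common.sh whose internals are simplified here, and the printf-based serial formatting is an assumption that happens to match the serials in the trace:

  sync                                                # flush outstanding I/O first
  for i in $(seq 0 5); do
      # Detach the initiator-side controller for this subsystem.
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
      # Poll lsblk until the matching serial number disappears (helper shown in trace).
      waitforserial_disconnect "SPDK$(printf '%014d' "$i")"   # e.g. SPDK00000000000000
      # Remove the subsystem from the running target over the RPC socket.
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done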
00:19:34.918 12:33:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:19:34.918 12:33:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:34.918 12:33:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000001 00:19:34.918 12:33:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:34.918 12:33:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000001 00:19:34.918 12:33:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:19:34.918 12:33:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:34.918 12:33:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.918 12:33:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:34.918 12:33:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.918 12:33:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:34.918 12:33:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:35.852 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:35.852 12:33:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:19:35.852 12:33:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:19:35.852 12:33:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:35.852 12:33:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000002 00:19:35.852 12:33:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:35.852 12:33:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000002 00:19:36.109 12:33:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:19:36.109 12:33:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:36.109 12:33:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.109 12:33:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:36.109 12:33:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.109 12:33:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:36.109 12:33:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:37.043 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:37.043 12:33:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect 
SPDK00000000000003 00:19:37.043 12:33:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:19:37.043 12:33:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:37.043 12:33:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000003 00:19:37.043 12:33:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:37.043 12:33:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000003 00:19:37.043 12:33:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:19:37.043 12:33:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:37.043 12:33:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.043 12:33:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:37.043 12:33:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.043 12:33:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:37.043 12:33:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:37.977 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:37.977 12:33:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:19:37.977 12:33:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:19:37.977 12:33:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:37.977 12:33:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000004 00:19:37.977 12:33:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:37.977 12:33:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000004 00:19:37.977 12:33:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:19:37.977 12:33:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:37.977 12:33:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.977 12:33:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:37.977 12:33:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.977 12:33:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:37.977 12:33:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:38.910 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:38.910 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # 
waitforserial_disconnect SPDK00000000000005 00:19:38.910 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:19:38.910 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:38.910 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000005 00:19:38.910 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:38.910 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000005 00:19:38.910 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:19:38.910 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:38.910 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.910 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:38.910 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.910 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:38.910 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:19:38.910 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:38.910 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:19:38.910 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:38.910 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:38.910 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:19:38.910 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:38.910 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:38.910 rmmod nvme_rdma 00:19:39.168 rmmod nvme_fabrics 00:19:39.168 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:39.168 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:19:39.168 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:19:39.168 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 2792066 ']' 00:19:39.168 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 2792066 00:19:39.169 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' -z 2792066 ']' 00:19:39.169 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # kill -0 2792066 00:19:39.169 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # uname 00:19:39.169 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.169 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2792066 00:19:39.169 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:39.169 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:39.169 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2792066' 00:19:39.169 killing process with pid 2792066 00:19:39.169 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # kill 2792066 00:19:39.169 12:33:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@978 -- # wait 2792066 00:19:39.427 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:39.427 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:39.427 00:19:39.427 real 0m29.360s 00:19:39.427 user 1m46.430s 00:19:39.427 sys 0m8.756s 00:19:39.427 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.427 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:39.427 ************************************ 00:19:39.427 END TEST nvmf_srq_overwhelm 00:19:39.427 ************************************ 00:19:39.427 12:33:45 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:19:39.427 12:33:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:39.427 12:33:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.427 12:33:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:39.427 ************************************ 00:19:39.427 START TEST nvmf_shutdown 00:19:39.427 ************************************ 00:19:39.427 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:19:39.786 * Looking for test storage... 
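The killprocess records above capture how nvmftestfini reaps the target process. A minimal reconstruction of that helper, based only on the traced commands (the real autotest_common.sh version adds retries and further error handling):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1              # no pid recorded
      kill -0 "$pid" || return 1             # bail out if the process is already gone
      if [ "$(uname)" = Linux ]; then
          # Never kill sudo itself; SPDK app threads report their comm as reactor_N.
          [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                            # reap it so sockets and hugepages free up
  }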
00:19:39.786 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:39.786 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:39.786 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:19:39.786 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:39.786 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:39.786 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:39.786 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:39.786 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:39.786 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:39.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.787 --rc genhtml_branch_coverage=1 00:19:39.787 --rc genhtml_function_coverage=1 00:19:39.787 --rc genhtml_legend=1 00:19:39.787 --rc geninfo_all_blocks=1 00:19:39.787 --rc geninfo_unexecuted_blocks=1 00:19:39.787 00:19:39.787 ' 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:39.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.787 --rc genhtml_branch_coverage=1 00:19:39.787 --rc genhtml_function_coverage=1 00:19:39.787 --rc genhtml_legend=1 00:19:39.787 --rc geninfo_all_blocks=1 00:19:39.787 --rc geninfo_unexecuted_blocks=1 00:19:39.787 00:19:39.787 ' 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:39.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.787 --rc genhtml_branch_coverage=1 00:19:39.787 --rc genhtml_function_coverage=1 00:19:39.787 --rc genhtml_legend=1 00:19:39.787 --rc geninfo_all_blocks=1 00:19:39.787 --rc geninfo_unexecuted_blocks=1 00:19:39.787 00:19:39.787 ' 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:39.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.787 --rc genhtml_branch_coverage=1 00:19:39.787 --rc genhtml_function_coverage=1 00:19:39.787 --rc genhtml_legend=1 00:19:39.787 --rc geninfo_all_blocks=1 00:19:39.787 --rc geninfo_unexecuted_blocks=1 00:19:39.787 00:19:39.787 ' 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # 
uname -s 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:39.787 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:39.787 12:33:45 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.787 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:39.787 ************************************ 00:19:39.787 START TEST nvmf_shutdown_tc1 00:19:39.787 ************************************ 00:19:39.788 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:19:39.788 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:19:39.788 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:39.788 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:39.788 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.788 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:39.788 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:39.788 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:39.788 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.788 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.788 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.788 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:39.788 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:39.788 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:19:39.788 12:33:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:42.324 12:33:47 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # 
pci_devs=("${mlx[@]}") 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:19:42.324 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:19:42.324 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:83:00.0: mlx_0_0' 00:19:42.324 Found net devices under 0000:83:00.0: mlx_0_0 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:42.324 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:19:42.325 Found net devices under 0000:83:00.1: mlx_0_1 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 
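Before any addresses are assigned, rdma_device_init (traced above) loads the kernel RDMA stack. The module set, exactly as modprobed in the trace, condenses to this sketch of the load_ib_rdma_modules step:

  # Kernel modules required for the RDMA/IB transport, in the order traced above;
  # modprobe resolves any inter-module dependencies itself.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
  done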
00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:42.325 12:33:47 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:42.325 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:42.325 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:19:42.325 altname enp131s0f0np0 00:19:42.325 inet 192.168.100.8/24 scope global mlx_0_0 00:19:42.325 valid_lft forever preferred_lft forever 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:42.325 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:42.325 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:19:42.325 altname enp131s0f1np1 00:19:42.325 inet 192.168.100.9/24 scope global mlx_0_1 00:19:42.325 valid_lft forever preferred_lft forever 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:42.325 12:33:47 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:42.325 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:42.326 192.168.100.9' 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:42.326 192.168.100.9' 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1 00:19:42.326 12:33:47 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:42.326 192.168.100.9' 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2796023 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2796023 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2796023 ']' 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.326 12:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:42.326 [2024-11-20 12:33:47.771397] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:19:42.326 [2024-11-20 12:33:47.771497] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:42.326 [2024-11-20 12:33:47.889106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:19:42.326 [2024-11-20 12:33:48.000036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:42.326 [2024-11-20 12:33:48.000137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:42.326 [2024-11-20 12:33:48.000172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:42.326 [2024-11-20 12:33:48.000203] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:42.326 [2024-11-20 12:33:48.000229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:42.326 [2024-11-20 12:33:48.002605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:42.326 [2024-11-20 12:33:48.002661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:19:42.326 [2024-11-20 12:33:48.002742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:19:42.326 [2024-11-20 12:33:48.002752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:42.584 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:42.584 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0
00:19:42.584 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:19:42.584 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:42.584 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:19:42.584 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:42.584 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:19:42.584 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:42.584 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:19:42.584 [2024-11-20 12:33:48.177756] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16610d0/0x16655c0) succeed.
00:19:42.584 [2024-11-20 12:33:48.192287] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1662760/0x16a6c60) succeed.
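
Note on the trace above: the address discovery that produced NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_SECOND_TARGET_IP=192.168.100.9 (nvmf/common.sh@116-@117 and @485-@486) reduces to a one-line pipeline per interface. A minimal sketch reconstructed from the traced commands, not the verbatim SPDK helper:

# Reconstructed from the nvmf/common.sh@116-@117 trace: prints the first
# IPv4 address of an interface with the /prefix stripped.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
# get_ip_address mlx_0_0 -> 192.168.100.8 ; get_ip_address mlx_0_1 -> 192.168.100.9
# The first/second target IPs are then peeled off the newline-separated list
# with 'head -n 1' and 'tail -n +2 | head -n 1', as traced at common.sh@485-@486.
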
00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.843 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:42.843 Malloc1 00:19:42.843 [2024-11-20 12:33:48.449881] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:42.843 Malloc2 00:19:42.843 Malloc3 00:19:42.843 Malloc4 00:19:43.101 Malloc5 00:19:43.102 Malloc6 00:19:43.102 Malloc7 00:19:43.102 Malloc8 00:19:43.102 Malloc9 00:19:43.102 Malloc10 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2796167 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2796167 /var/tmp/bdevperf.sock 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2796167 ']' 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:43.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
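
At this point shutdown.sh@27-@29 has rebuilt rpcs.txt (one template per subsystem via the repeated cat calls above), @36 has replayed it through rpc_cmd, and the log confirms the outcome: bdevs Malloc1 through Malloc10 plus an RDMA listener on 192.168.100.8:4420. The template bodies are not echoed into this log, so the sketch below is a hedged guess at what each loop iteration appends, assuming the standard SPDK malloc-bdev pattern; the RPC lines and malloc size/block counts are illustrative, not read from the log:

# One pass of the shutdown.sh@28-@29 loop, sketched. Only the resulting
# Malloc1..Malloc10 bdevs and the 4420 listener are confirmed by the log;
# the exact RPC lines below are assumptions.
for i in "${num_subsystems[@]}"; do
    cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT
EOF
done
rpc_cmd < "$testdir/rpcs.txt"   # presumably how the bare rpc_cmd at shutdown.sh@36 consumes the file
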
00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:43.361 { 00:19:43.361 "params": { 00:19:43.361 "name": "Nvme$subsystem", 00:19:43.361 "trtype": "$TEST_TRANSPORT", 00:19:43.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.361 "adrfam": "ipv4", 00:19:43.361 "trsvcid": "$NVMF_PORT", 00:19:43.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.361 "hdgst": ${hdgst:-false}, 00:19:43.361 "ddgst": ${ddgst:-false} 00:19:43.361 }, 00:19:43.361 "method": "bdev_nvme_attach_controller" 00:19:43.361 } 00:19:43.361 EOF 00:19:43.361 )") 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:43.361 { 00:19:43.361 "params": { 00:19:43.361 "name": "Nvme$subsystem", 00:19:43.361 "trtype": "$TEST_TRANSPORT", 00:19:43.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.361 "adrfam": "ipv4", 00:19:43.361 "trsvcid": "$NVMF_PORT", 00:19:43.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.361 "hdgst": ${hdgst:-false}, 00:19:43.361 "ddgst": ${ddgst:-false} 00:19:43.361 }, 00:19:43.361 "method": "bdev_nvme_attach_controller" 00:19:43.361 } 00:19:43.361 EOF 00:19:43.361 )") 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:43.361 { 00:19:43.361 "params": { 00:19:43.361 "name": "Nvme$subsystem", 00:19:43.361 "trtype": "$TEST_TRANSPORT", 00:19:43.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.361 "adrfam": "ipv4", 00:19:43.361 "trsvcid": "$NVMF_PORT", 00:19:43.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.361 "hdgst": ${hdgst:-false}, 00:19:43.361 "ddgst": ${ddgst:-false} 00:19:43.361 }, 00:19:43.361 "method": "bdev_nvme_attach_controller" 00:19:43.361 } 00:19:43.361 EOF 00:19:43.361 )") 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:43.361 { 00:19:43.361 "params": { 00:19:43.361 "name": "Nvme$subsystem", 00:19:43.361 "trtype": "$TEST_TRANSPORT", 00:19:43.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.361 "adrfam": "ipv4", 00:19:43.361 "trsvcid": "$NVMF_PORT", 00:19:43.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.361 "hdgst": ${hdgst:-false}, 00:19:43.361 "ddgst": ${ddgst:-false} 00:19:43.361 }, 00:19:43.361 "method": "bdev_nvme_attach_controller" 00:19:43.361 } 00:19:43.361 EOF 00:19:43.361 )") 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:43.361 { 00:19:43.361 "params": { 00:19:43.361 "name": "Nvme$subsystem", 00:19:43.361 "trtype": "$TEST_TRANSPORT", 00:19:43.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.361 "adrfam": "ipv4", 00:19:43.361 "trsvcid": "$NVMF_PORT", 00:19:43.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.361 "hdgst": ${hdgst:-false}, 00:19:43.361 "ddgst": ${ddgst:-false} 00:19:43.361 }, 00:19:43.361 "method": "bdev_nvme_attach_controller" 00:19:43.361 } 00:19:43.361 EOF 00:19:43.361 )") 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:43.361 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:43.361 { 00:19:43.361 "params": { 00:19:43.361 "name": "Nvme$subsystem", 00:19:43.361 "trtype": "$TEST_TRANSPORT", 00:19:43.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.361 "adrfam": "ipv4", 00:19:43.361 "trsvcid": "$NVMF_PORT", 00:19:43.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.362 "hdgst": ${hdgst:-false}, 00:19:43.362 "ddgst": ${ddgst:-false} 00:19:43.362 }, 00:19:43.362 "method": "bdev_nvme_attach_controller" 00:19:43.362 } 00:19:43.362 EOF 00:19:43.362 )") 00:19:43.362 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:43.362 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:43.362 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:43.362 { 00:19:43.362 "params": { 00:19:43.362 "name": "Nvme$subsystem", 00:19:43.362 "trtype": "$TEST_TRANSPORT", 00:19:43.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.362 "adrfam": "ipv4", 00:19:43.362 "trsvcid": "$NVMF_PORT", 00:19:43.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.362 "hdgst": ${hdgst:-false}, 00:19:43.362 "ddgst": ${ddgst:-false} 00:19:43.362 }, 00:19:43.362 "method": "bdev_nvme_attach_controller" 00:19:43.362 } 00:19:43.362 EOF 00:19:43.362 )") 00:19:43.362 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:43.362 12:33:48 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:43.362 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:43.362 { 00:19:43.362 "params": { 00:19:43.362 "name": "Nvme$subsystem", 00:19:43.362 "trtype": "$TEST_TRANSPORT", 00:19:43.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.362 "adrfam": "ipv4", 00:19:43.362 "trsvcid": "$NVMF_PORT", 00:19:43.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.362 "hdgst": ${hdgst:-false}, 00:19:43.362 "ddgst": ${ddgst:-false} 00:19:43.362 }, 00:19:43.362 "method": "bdev_nvme_attach_controller" 00:19:43.362 } 00:19:43.362 EOF 00:19:43.362 )") 00:19:43.362 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:43.362 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:43.362 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:43.362 { 00:19:43.362 "params": { 00:19:43.362 "name": "Nvme$subsystem", 00:19:43.362 "trtype": "$TEST_TRANSPORT", 00:19:43.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.362 "adrfam": "ipv4", 00:19:43.362 "trsvcid": "$NVMF_PORT", 00:19:43.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.362 "hdgst": ${hdgst:-false}, 00:19:43.362 "ddgst": ${ddgst:-false} 00:19:43.362 }, 00:19:43.362 "method": "bdev_nvme_attach_controller" 00:19:43.362 } 00:19:43.362 EOF 00:19:43.362 )") 00:19:43.362 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:43.362 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:43.362 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:43.362 { 00:19:43.362 "params": { 00:19:43.362 "name": "Nvme$subsystem", 00:19:43.362 "trtype": "$TEST_TRANSPORT", 00:19:43.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.362 "adrfam": "ipv4", 00:19:43.362 "trsvcid": "$NVMF_PORT", 00:19:43.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.362 "hdgst": ${hdgst:-false}, 00:19:43.362 "ddgst": ${ddgst:-false} 00:19:43.362 }, 00:19:43.362 "method": "bdev_nvme_attach_controller" 00:19:43.362 } 00:19:43.362 EOF 00:19:43.362 )") 00:19:43.362 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:43.362 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
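
The wall of heredoc stanzas above is gen_nvmf_target_json at work (nvmf/common.sh@560-@586): one bdev_nvme_attach_controller stanza per subsystem is captured into a config array, then the stanzas are joined with IFS=','. Condensed into a runnable sketch of the same pattern; how common.sh wraps the joined stanzas into the final JSON document for the 'jq .' step at @584 is not visible in this log:

# Heredoc-accumulate pattern from the nvmf/common.sh@560-@586 trace, condensed.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"   # the comma-joined blob echoed at common.sh@586
}
# Invoked here as: gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10, and handed to
# bdev_svc through a /dev/fd process substitution.
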
00:19:43.362 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:19:43.362 12:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:43.362 "params": { 00:19:43.362 "name": "Nvme1", 00:19:43.362 "trtype": "rdma", 00:19:43.362 "traddr": "192.168.100.8", 00:19:43.362 "adrfam": "ipv4", 00:19:43.362 "trsvcid": "4420", 00:19:43.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.362 "hdgst": false, 00:19:43.362 "ddgst": false 00:19:43.362 }, 00:19:43.362 "method": "bdev_nvme_attach_controller" 00:19:43.362 },{ 00:19:43.362 "params": { 00:19:43.362 "name": "Nvme2", 00:19:43.362 "trtype": "rdma", 00:19:43.362 "traddr": "192.168.100.8", 00:19:43.362 "adrfam": "ipv4", 00:19:43.362 "trsvcid": "4420", 00:19:43.362 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:43.362 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:43.362 "hdgst": false, 00:19:43.362 "ddgst": false 00:19:43.362 }, 00:19:43.362 "method": "bdev_nvme_attach_controller" 00:19:43.362 },{ 00:19:43.362 "params": { 00:19:43.362 "name": "Nvme3", 00:19:43.362 "trtype": "rdma", 00:19:43.362 "traddr": "192.168.100.8", 00:19:43.362 "adrfam": "ipv4", 00:19:43.362 "trsvcid": "4420", 00:19:43.362 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:43.362 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:43.362 "hdgst": false, 00:19:43.362 "ddgst": false 00:19:43.362 }, 00:19:43.362 "method": "bdev_nvme_attach_controller" 00:19:43.362 },{ 00:19:43.362 "params": { 00:19:43.362 "name": "Nvme4", 00:19:43.362 "trtype": "rdma", 00:19:43.362 "traddr": "192.168.100.8", 00:19:43.362 "adrfam": "ipv4", 00:19:43.362 "trsvcid": "4420", 00:19:43.362 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:43.362 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:43.362 "hdgst": false, 00:19:43.362 "ddgst": false 00:19:43.362 }, 00:19:43.362 "method": "bdev_nvme_attach_controller" 00:19:43.362 },{ 00:19:43.362 "params": { 00:19:43.362 "name": "Nvme5", 00:19:43.362 "trtype": "rdma", 00:19:43.362 "traddr": "192.168.100.8", 00:19:43.362 "adrfam": "ipv4", 00:19:43.362 "trsvcid": "4420", 00:19:43.362 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:43.362 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:43.362 "hdgst": false, 00:19:43.362 "ddgst": false 00:19:43.362 }, 00:19:43.362 "method": "bdev_nvme_attach_controller" 00:19:43.362 },{ 00:19:43.362 "params": { 00:19:43.362 "name": "Nvme6", 00:19:43.362 "trtype": "rdma", 00:19:43.362 "traddr": "192.168.100.8", 00:19:43.362 "adrfam": "ipv4", 00:19:43.362 "trsvcid": "4420", 00:19:43.362 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:43.362 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:43.362 "hdgst": false, 00:19:43.362 "ddgst": false 00:19:43.362 }, 00:19:43.362 "method": "bdev_nvme_attach_controller" 00:19:43.362 },{ 00:19:43.362 "params": { 00:19:43.362 "name": "Nvme7", 00:19:43.362 "trtype": "rdma", 00:19:43.362 "traddr": "192.168.100.8", 00:19:43.362 "adrfam": "ipv4", 00:19:43.362 "trsvcid": "4420", 00:19:43.362 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:43.362 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:43.362 "hdgst": false, 00:19:43.362 "ddgst": false 00:19:43.362 }, 00:19:43.362 "method": "bdev_nvme_attach_controller" 00:19:43.362 },{ 00:19:43.362 "params": { 00:19:43.362 "name": "Nvme8", 00:19:43.362 "trtype": "rdma", 00:19:43.362 "traddr": "192.168.100.8", 00:19:43.362 "adrfam": "ipv4", 00:19:43.362 "trsvcid": "4420", 00:19:43.362 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:19:43.362 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:43.362 "hdgst": false, 00:19:43.362 "ddgst": false 00:19:43.362 }, 00:19:43.362 "method": "bdev_nvme_attach_controller" 00:19:43.362 },{ 00:19:43.362 "params": { 00:19:43.362 "name": "Nvme9", 00:19:43.362 "trtype": "rdma", 00:19:43.362 "traddr": "192.168.100.8", 00:19:43.362 "adrfam": "ipv4", 00:19:43.362 "trsvcid": "4420", 00:19:43.362 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:43.362 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:43.362 "hdgst": false, 00:19:43.362 "ddgst": false 00:19:43.362 }, 00:19:43.362 "method": "bdev_nvme_attach_controller" 00:19:43.362 },{ 00:19:43.362 "params": { 00:19:43.362 "name": "Nvme10", 00:19:43.362 "trtype": "rdma", 00:19:43.362 "traddr": "192.168.100.8", 00:19:43.362 "adrfam": "ipv4", 00:19:43.362 "trsvcid": "4420", 00:19:43.362 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:43.362 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:43.362 "hdgst": false, 00:19:43.362 "ddgst": false 00:19:43.362 }, 00:19:43.362 "method": "bdev_nvme_attach_controller" 00:19:43.362 }' 00:19:43.362 [2024-11-20 12:33:48.963650] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:19:43.362 [2024-11-20 12:33:48.963741] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:43.362 [2024-11-20 12:33:49.040396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.363 [2024-11-20 12:33:49.103251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.294 12:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.294 12:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:19:44.294 12:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:44.294 12:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.294 12:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:44.294 12:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.294 12:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2796167 00:19:44.294 12:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:19:44.294 12:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:19:45.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2796167 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:45.227 12:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2796023 00:19:45.227 12:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:45.227 12:33:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:45.227 12:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:19:45.227 12:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:19:45.487 12:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:45.487 12:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:45.487 { 00:19:45.487 "params": { 00:19:45.487 "name": "Nvme$subsystem", 00:19:45.487 "trtype": "$TEST_TRANSPORT", 00:19:45.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.487 "adrfam": "ipv4", 00:19:45.487 "trsvcid": "$NVMF_PORT", 00:19:45.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.487 "hdgst": ${hdgst:-false}, 00:19:45.487 "ddgst": ${ddgst:-false} 00:19:45.487 }, 00:19:45.487 "method": "bdev_nvme_attach_controller" 00:19:45.487 } 00:19:45.487 EOF 00:19:45.487 )") 00:19:45.487 12:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:45.487 12:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:45.487 12:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:45.487 { 00:19:45.487 "params": { 00:19:45.487 "name": "Nvme$subsystem", 00:19:45.487 "trtype": "$TEST_TRANSPORT", 00:19:45.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.487 "adrfam": "ipv4", 00:19:45.487 "trsvcid": "$NVMF_PORT", 00:19:45.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.487 "hdgst": ${hdgst:-false}, 00:19:45.487 "ddgst": ${ddgst:-false} 00:19:45.487 }, 00:19:45.487 "method": "bdev_nvme_attach_controller" 00:19:45.487 } 00:19:45.487 EOF 00:19:45.487 )") 00:19:45.487 12:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:45.487 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:45.487 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:45.487 { 00:19:45.487 "params": { 00:19:45.487 "name": "Nvme$subsystem", 00:19:45.487 "trtype": "$TEST_TRANSPORT", 00:19:45.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.487 "adrfam": "ipv4", 00:19:45.487 "trsvcid": "$NVMF_PORT", 00:19:45.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.488 "hdgst": ${hdgst:-false}, 00:19:45.488 "ddgst": ${ddgst:-false} 00:19:45.488 }, 00:19:45.488 "method": "bdev_nvme_attach_controller" 00:19:45.488 } 00:19:45.488 EOF 00:19:45.488 )") 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:45.488 { 00:19:45.488 "params": { 00:19:45.488 "name": "Nvme$subsystem", 
00:19:45.488 "trtype": "$TEST_TRANSPORT", 00:19:45.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.488 "adrfam": "ipv4", 00:19:45.488 "trsvcid": "$NVMF_PORT", 00:19:45.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.488 "hdgst": ${hdgst:-false}, 00:19:45.488 "ddgst": ${ddgst:-false} 00:19:45.488 }, 00:19:45.488 "method": "bdev_nvme_attach_controller" 00:19:45.488 } 00:19:45.488 EOF 00:19:45.488 )") 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:45.488 { 00:19:45.488 "params": { 00:19:45.488 "name": "Nvme$subsystem", 00:19:45.488 "trtype": "$TEST_TRANSPORT", 00:19:45.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.488 "adrfam": "ipv4", 00:19:45.488 "trsvcid": "$NVMF_PORT", 00:19:45.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.488 "hdgst": ${hdgst:-false}, 00:19:45.488 "ddgst": ${ddgst:-false} 00:19:45.488 }, 00:19:45.488 "method": "bdev_nvme_attach_controller" 00:19:45.488 } 00:19:45.488 EOF 00:19:45.488 )") 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:45.488 { 00:19:45.488 "params": { 00:19:45.488 "name": "Nvme$subsystem", 00:19:45.488 "trtype": "$TEST_TRANSPORT", 00:19:45.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.488 "adrfam": "ipv4", 00:19:45.488 "trsvcid": "$NVMF_PORT", 00:19:45.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.488 "hdgst": ${hdgst:-false}, 00:19:45.488 "ddgst": ${ddgst:-false} 00:19:45.488 }, 00:19:45.488 "method": "bdev_nvme_attach_controller" 00:19:45.488 } 00:19:45.488 EOF 00:19:45.488 )") 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:45.488 { 00:19:45.488 "params": { 00:19:45.488 "name": "Nvme$subsystem", 00:19:45.488 "trtype": "$TEST_TRANSPORT", 00:19:45.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.488 "adrfam": "ipv4", 00:19:45.488 "trsvcid": "$NVMF_PORT", 00:19:45.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.488 "hdgst": ${hdgst:-false}, 00:19:45.488 "ddgst": ${ddgst:-false} 00:19:45.488 }, 00:19:45.488 "method": "bdev_nvme_attach_controller" 00:19:45.488 } 00:19:45.488 EOF 00:19:45.488 )") 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:45.488 
12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:45.488 { 00:19:45.488 "params": { 00:19:45.488 "name": "Nvme$subsystem", 00:19:45.488 "trtype": "$TEST_TRANSPORT", 00:19:45.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.488 "adrfam": "ipv4", 00:19:45.488 "trsvcid": "$NVMF_PORT", 00:19:45.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.488 "hdgst": ${hdgst:-false}, 00:19:45.488 "ddgst": ${ddgst:-false} 00:19:45.488 }, 00:19:45.488 "method": "bdev_nvme_attach_controller" 00:19:45.488 } 00:19:45.488 EOF 00:19:45.488 )") 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:45.488 { 00:19:45.488 "params": { 00:19:45.488 "name": "Nvme$subsystem", 00:19:45.488 "trtype": "$TEST_TRANSPORT", 00:19:45.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.488 "adrfam": "ipv4", 00:19:45.488 "trsvcid": "$NVMF_PORT", 00:19:45.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.488 "hdgst": ${hdgst:-false}, 00:19:45.488 "ddgst": ${ddgst:-false} 00:19:45.488 }, 00:19:45.488 "method": "bdev_nvme_attach_controller" 00:19:45.488 } 00:19:45.488 EOF 00:19:45.488 )") 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:45.488 { 00:19:45.488 "params": { 00:19:45.488 "name": "Nvme$subsystem", 00:19:45.488 "trtype": "$TEST_TRANSPORT", 00:19:45.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.488 "adrfam": "ipv4", 00:19:45.488 "trsvcid": "$NVMF_PORT", 00:19:45.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.488 "hdgst": ${hdgst:-false}, 00:19:45.488 "ddgst": ${ddgst:-false} 00:19:45.488 }, 00:19:45.488 "method": "bdev_nvme_attach_controller" 00:19:45.488 } 00:19:45.488 EOF 00:19:45.488 )") 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
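
While this second config is assembled for bdevperf, the tc1 logic itself has already played out in the trace above; condensed from the traced commands at shutdown.sh@78-@92 (paths shortened, error handling omitted, waitforlisten/rpc_cmd being the harness helpers seen in the trace):

# tc1 in a nutshell, condensed from the shutdown.sh trace above.
"$rootdir/test/app/bdev_svc/bdev_svc" -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") &
perfpid=$!
waitforlisten $perfpid /var/tmp/bdevperf.sock
rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init   # all ten controllers attached
kill -9 $perfpid                                        # abrupt initiator death (pid 2796167 above)
rm -f /var/run/spdk_bdev1
sleep 1
kill -0 $nvmfpid                                        # the target (pid 2796023) must have survived
"$rootdir/build/examples/bdevperf" --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1                       # then prove I/O still flows
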
00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:19:45.488 12:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:45.488 "params": { 00:19:45.488 "name": "Nvme1", 00:19:45.489 "trtype": "rdma", 00:19:45.489 "traddr": "192.168.100.8", 00:19:45.489 "adrfam": "ipv4", 00:19:45.489 "trsvcid": "4420", 00:19:45.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:45.489 "hdgst": false, 00:19:45.489 "ddgst": false 00:19:45.489 }, 00:19:45.489 "method": "bdev_nvme_attach_controller" 00:19:45.489 },{ 00:19:45.489 "params": { 00:19:45.489 "name": "Nvme2", 00:19:45.489 "trtype": "rdma", 00:19:45.489 "traddr": "192.168.100.8", 00:19:45.489 "adrfam": "ipv4", 00:19:45.489 "trsvcid": "4420", 00:19:45.489 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:45.489 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:45.489 "hdgst": false, 00:19:45.489 "ddgst": false 00:19:45.489 }, 00:19:45.489 "method": "bdev_nvme_attach_controller" 00:19:45.489 },{ 00:19:45.489 "params": { 00:19:45.489 "name": "Nvme3", 00:19:45.489 "trtype": "rdma", 00:19:45.489 "traddr": "192.168.100.8", 00:19:45.489 "adrfam": "ipv4", 00:19:45.489 "trsvcid": "4420", 00:19:45.489 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:45.489 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:45.489 "hdgst": false, 00:19:45.489 "ddgst": false 00:19:45.489 }, 00:19:45.489 "method": "bdev_nvme_attach_controller" 00:19:45.489 },{ 00:19:45.489 "params": { 00:19:45.489 "name": "Nvme4", 00:19:45.489 "trtype": "rdma", 00:19:45.489 "traddr": "192.168.100.8", 00:19:45.489 "adrfam": "ipv4", 00:19:45.489 "trsvcid": "4420", 00:19:45.489 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:45.489 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:45.489 "hdgst": false, 00:19:45.489 "ddgst": false 00:19:45.489 }, 00:19:45.489 "method": "bdev_nvme_attach_controller" 00:19:45.489 },{ 00:19:45.489 "params": { 00:19:45.489 "name": "Nvme5", 00:19:45.489 "trtype": "rdma", 00:19:45.489 "traddr": "192.168.100.8", 00:19:45.489 "adrfam": "ipv4", 00:19:45.489 "trsvcid": "4420", 00:19:45.489 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:45.489 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:45.489 "hdgst": false, 00:19:45.489 "ddgst": false 00:19:45.489 }, 00:19:45.489 "method": "bdev_nvme_attach_controller" 00:19:45.489 },{ 00:19:45.489 "params": { 00:19:45.489 "name": "Nvme6", 00:19:45.489 "trtype": "rdma", 00:19:45.489 "traddr": "192.168.100.8", 00:19:45.489 "adrfam": "ipv4", 00:19:45.489 "trsvcid": "4420", 00:19:45.489 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:45.489 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:45.489 "hdgst": false, 00:19:45.489 "ddgst": false 00:19:45.489 }, 00:19:45.489 "method": "bdev_nvme_attach_controller" 00:19:45.489 },{ 00:19:45.489 "params": { 00:19:45.489 "name": "Nvme7", 00:19:45.489 "trtype": "rdma", 00:19:45.489 "traddr": "192.168.100.8", 00:19:45.489 "adrfam": "ipv4", 00:19:45.489 "trsvcid": "4420", 00:19:45.489 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:45.489 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:45.489 "hdgst": false, 00:19:45.489 "ddgst": false 00:19:45.489 }, 00:19:45.489 "method": "bdev_nvme_attach_controller" 00:19:45.489 },{ 00:19:45.489 "params": { 00:19:45.489 "name": "Nvme8", 00:19:45.489 "trtype": "rdma", 00:19:45.489 "traddr": "192.168.100.8", 00:19:45.489 "adrfam": "ipv4", 00:19:45.489 "trsvcid": "4420", 00:19:45.489 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:19:45.489 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:45.489 "hdgst": false, 00:19:45.489 "ddgst": false 00:19:45.489 }, 00:19:45.489 "method": "bdev_nvme_attach_controller" 00:19:45.489 },{ 00:19:45.489 "params": { 00:19:45.489 "name": "Nvme9", 00:19:45.489 "trtype": "rdma", 00:19:45.489 "traddr": "192.168.100.8", 00:19:45.489 "adrfam": "ipv4", 00:19:45.489 "trsvcid": "4420", 00:19:45.489 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:45.489 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:45.489 "hdgst": false, 00:19:45.489 "ddgst": false 00:19:45.489 }, 00:19:45.489 "method": "bdev_nvme_attach_controller" 00:19:45.489 },{ 00:19:45.489 "params": { 00:19:45.489 "name": "Nvme10", 00:19:45.489 "trtype": "rdma", 00:19:45.489 "traddr": "192.168.100.8", 00:19:45.489 "adrfam": "ipv4", 00:19:45.489 "trsvcid": "4420", 00:19:45.489 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:45.489 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:45.489 "hdgst": false, 00:19:45.489 "ddgst": false 00:19:45.489 }, 00:19:45.489 "method": "bdev_nvme_attach_controller" 00:19:45.489 }' 00:19:45.489 [2024-11-20 12:33:51.043666] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:19:45.489 [2024-11-20 12:33:51.043770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2796387 ] 00:19:45.489 [2024-11-20 12:33:51.119510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.489 [2024-11-20 12:33:51.183878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.424 Running I/O for 1 seconds... 00:19:47.861 1966.00 IOPS, 122.88 MiB/s 00:19:47.862 Latency(us) 00:19:47.862 [2024-11-20T11:33:53.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.862 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:47.862 Verification LBA range: start 0x0 length 0x400 00:19:47.862 Nvme1n1 : 1.25 223.19 13.95 0.00 0.00 276240.13 47962.64 299815.06 00:19:47.862 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:47.862 Verification LBA range: start 0x0 length 0x400 00:19:47.862 Nvme2n1 : 1.26 232.42 14.53 0.00 0.00 261503.56 5461.33 290494.39 00:19:47.862 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:47.862 Verification LBA range: start 0x0 length 0x400 00:19:47.862 Nvme3n1 : 1.27 270.09 16.88 0.00 0.00 223911.77 8980.86 211268.65 00:19:47.862 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:47.862 Verification LBA range: start 0x0 length 0x400 00:19:47.862 Nvme4n1 : 1.28 267.41 16.71 0.00 0.00 221790.03 5873.97 195734.19 00:19:47.862 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:47.862 Verification LBA range: start 0x0 length 0x400 00:19:47.862 Nvme5n1 : 1.28 256.98 16.06 0.00 0.00 226006.40 13981.01 177869.56 00:19:47.862 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:47.862 Verification LBA range: start 0x0 length 0x400 00:19:47.862 Nvme6n1 : 1.28 259.86 16.24 0.00 0.00 219435.06 13495.56 160781.65 00:19:47.862 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:47.862 Verification LBA range: start 0x0 length 0x400 00:19:47.862 Nvme7n1 : 1.28 262.71 16.42 0.00 0.00 213049.21 13107.20 145247.19 00:19:47.862 Job: Nvme8n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:19:47.862 Verification LBA range: start 0x0 length 0x400 00:19:47.862 Nvme8n1 : 1.28 268.62 16.79 0.00 0.00 204413.61 13010.11 152237.70 00:19:47.862 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:47.862 Verification LBA range: start 0x0 length 0x400 00:19:47.862 Nvme9n1 : 1.27 252.14 15.76 0.00 0.00 214538.16 16019.91 166995.44 00:19:47.862 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:47.862 Verification LBA range: start 0x0 length 0x400 00:19:47.862 Nvme10n1 : 1.27 201.28 12.58 0.00 0.00 262826.67 17573.36 310689.19 00:19:47.862 [2024-11-20T11:33:53.628Z] =================================================================================================================== 00:19:47.862 [2024-11-20T11:33:53.628Z] Total : 2494.71 155.92 0.00 0.00 230439.18 5461.33 310689.19 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:48.120 rmmod nvme_rdma 00:19:48.120 rmmod nvme_fabrics 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2796023 ']' 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2796023 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2796023 ']' 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2796023 00:19:48.120 12:33:53 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2796023 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:48.120 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2796023' 00:19:48.120 killing process with pid 2796023 00:19:48.121 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2796023 00:19:48.121 12:33:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2796023 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:48.693 00:19:48.693 real 0m8.968s 00:19:48.693 user 0m28.514s 00:19:48.693 sys 0m2.853s 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:48.693 ************************************ 00:19:48.693 END TEST nvmf_shutdown_tc1 00:19:48.693 ************************************ 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:48.693 ************************************ 00:19:48.693 START TEST nvmf_shutdown_tc2 00:19:48.693 ************************************ 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:48.693 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:48.694 12:33:54 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:19:48.694 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 
0x1015)' 00:19:48.694 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:19:48.694 Found net devices under 0000:83:00.0: mlx_0_0 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:19:48.694 Found net devices under 0000:83:00.1: mlx_0_1 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:19:48.694 12:33:54 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:19:48.694 12:33:54 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:48.694 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:48.694 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:48.694 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:19:48.694 altname enp131s0f0np0 00:19:48.694 inet 192.168.100.8/24 scope global mlx_0_0 00:19:48.694 valid_lft forever preferred_lft forever 00:19:48.695 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:48.695 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:48.695 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:48.695 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:48.695 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:48.695 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:48.695 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:48.695 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:48.695 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr 
show mlx_0_1 00:19:48.695 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:48.695 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:19:48.695 altname enp131s0f1np1 00:19:48.695 inet 192.168.100.9/24 scope global mlx_0_1 00:19:48.695 valid_lft forever preferred_lft forever 00:19:48.695 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:19:48.695 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:48.695 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:48.695 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in 
$(get_rdma_if_list) 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:48.954 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:48.955 192.168.100.9' 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:48.955 192.168.100.9' 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:48.955 192.168.100.9' 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:48.955 12:33:54 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2796781 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2796781 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2796781 ']' 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.955 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.955 [2024-11-20 12:33:54.586826] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:19:48.955 [2024-11-20 12:33:54.586928] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.955 [2024-11-20 12:33:54.662788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:49.213 [2024-11-20 12:33:54.727076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.213 [2024-11-20 12:33:54.727133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.213 [2024-11-20 12:33:54.727148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.213 [2024-11-20 12:33:54.727161] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.213 [2024-11-20 12:33:54.727172] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
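The waitforlisten step traced above blocks until the freshly started nvmf_tgt answers on its RPC socket. A minimal sketch of that pattern in bash (illustrative only: the real helper lives in common/autotest_common.sh, and the scripts/rpc.py path plus the spdk_get_version liveness probe are stand-ins chosen for this sketch):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died while starting
            if scripts/rpc.py -s "$rpc_addr" spdk_get_version &> /dev/null; then
                return 0                              # RPC socket is live
            fi
            sleep 0.1
        done
        return 1                                      # timed out waiting
    }

Only once this returns does the harness proceed to create the transport, which is why the reactor and IB-device notices below appear after the EAL initialization chatter.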
00:19:49.213 [2024-11-20 12:33:54.728519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.213 [2024-11-20 12:33:54.728601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:49.213 [2024-11-20 12:33:54.728684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:49.213 [2024-11-20 12:33:54.728718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.213 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.213 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:19:49.213 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:49.213 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:49.213 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.213 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.213 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:49.213 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.213 12:33:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.213 [2024-11-20 12:33:54.926340] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b4f0d0/0x1b535c0) succeed. 00:19:49.213 [2024-11-20 12:33:54.941771] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b50760/0x1b94c60) succeed. 
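The two create_ib_device notices are the target acting on the nvmf_create_transport call at shutdown.sh@21: one IB device object per mlx5 port. Outside the rpc_cmd wrapper, the same step is a single rpc.py invocation (a sketch whose flags simply mirror the arguments traced above):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

Here -u sets the IO unit size (8 KiB) and --num-shared-buffers sizes the shared receive-buffer pool that every RDMA queue pair draws from.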
00:19:49.471 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.471 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:19:49.471 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:19:49.471 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:49.471 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.471 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:49.471 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:49.471 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:49.471 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:49.472 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:49.472 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:49.472 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:49.472 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:49.472 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:49.472 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:49.472 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:49.472 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:49.472 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:49.472 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:49.472 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:49.472 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:49.472 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:49.472 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:49.472 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:49.472 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:49.472 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:49.472 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:19:49.472 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.472 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.472 Malloc1 00:19:49.472 [2024-11-20 12:33:55.199801] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:49.472 Malloc2 00:19:49.730 Malloc3 00:19:49.730 Malloc4 00:19:49.730 Malloc5 00:19:49.730 Malloc6 00:19:49.730 Malloc7 00:19:49.989 Malloc8 00:19:49.989 Malloc9 00:19:49.989 Malloc10 00:19:49.989 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.989 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:19:49.989 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:49.989 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.989 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2796921 00:19:49.989 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2796921 /var/tmp/bdevperf.sock 00:19:49.989 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2796921 ']' 00:19:49.989 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.989 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:49.989 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:49.989 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.989 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
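The ten Malloc bdevs above come from the rpcs.txt batch assembled by the cat loop at shutdown.sh@29; the file itself is never echoed into the log. A representative per-subsystem sequence under that assumption (bdev sizes and serial numbers are illustrative, not taken from this run):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    for i in {1..10}; do
        $rpc bdev_malloc_create -b Malloc$i 128 512        # 128 MiB, 512 B blocks
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t rdma -a 192.168.100.8 -s 4420
    done

The listener address and service id do match the log: 192.168.100.8 port 4420 is exactly what nvmf_rdma_listen reports once the subsystems are up.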
00:19:49.989 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:19:49.989 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.989 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:19:49.989 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.989 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:49.990 { 00:19:49.990 "params": { 00:19:49.990 "name": "Nvme$subsystem", 00:19:49.990 "trtype": "$TEST_TRANSPORT", 00:19:49.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.990 "adrfam": "ipv4", 00:19:49.990 "trsvcid": "$NVMF_PORT", 00:19:49.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.990 "hdgst": ${hdgst:-false}, 00:19:49.990 "ddgst": ${ddgst:-false} 00:19:49.990 }, 00:19:49.990 "method": "bdev_nvme_attach_controller" 00:19:49.990 } 00:19:49.990 EOF 00:19:49.990 )") 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:49.990 { 00:19:49.990 "params": { 00:19:49.990 "name": "Nvme$subsystem", 00:19:49.990 "trtype": "$TEST_TRANSPORT", 00:19:49.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.990 "adrfam": "ipv4", 00:19:49.990 "trsvcid": "$NVMF_PORT", 00:19:49.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.990 "hdgst": ${hdgst:-false}, 00:19:49.990 "ddgst": ${ddgst:-false} 00:19:49.990 }, 00:19:49.990 "method": "bdev_nvme_attach_controller" 00:19:49.990 } 00:19:49.990 EOF 00:19:49.990 )") 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:49.990 { 00:19:49.990 "params": { 00:19:49.990 "name": "Nvme$subsystem", 00:19:49.990 "trtype": "$TEST_TRANSPORT", 00:19:49.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.990 "adrfam": "ipv4", 00:19:49.990 "trsvcid": "$NVMF_PORT", 00:19:49.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.990 "hdgst": ${hdgst:-false}, 00:19:49.990 "ddgst": ${ddgst:-false} 00:19:49.990 }, 00:19:49.990 "method": "bdev_nvme_attach_controller" 00:19:49.990 } 00:19:49.990 EOF 00:19:49.990 )") 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:49.990 { 00:19:49.990 "params": { 00:19:49.990 "name": "Nvme$subsystem", 00:19:49.990 "trtype": "$TEST_TRANSPORT", 00:19:49.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.990 "adrfam": "ipv4", 00:19:49.990 "trsvcid": "$NVMF_PORT", 00:19:49.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.990 "hdgst": ${hdgst:-false}, 00:19:49.990 "ddgst": ${ddgst:-false} 00:19:49.990 }, 00:19:49.990 "method": "bdev_nvme_attach_controller" 00:19:49.990 } 00:19:49.990 EOF 00:19:49.990 )") 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:49.990 { 00:19:49.990 "params": { 00:19:49.990 "name": "Nvme$subsystem", 00:19:49.990 "trtype": "$TEST_TRANSPORT", 00:19:49.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.990 "adrfam": "ipv4", 00:19:49.990 "trsvcid": "$NVMF_PORT", 00:19:49.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.990 "hdgst": ${hdgst:-false}, 00:19:49.990 "ddgst": ${ddgst:-false} 00:19:49.990 }, 00:19:49.990 "method": "bdev_nvme_attach_controller" 00:19:49.990 } 00:19:49.990 EOF 00:19:49.990 )") 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:49.990 { 00:19:49.990 "params": { 00:19:49.990 "name": "Nvme$subsystem", 00:19:49.990 "trtype": "$TEST_TRANSPORT", 00:19:49.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.990 "adrfam": "ipv4", 00:19:49.990 "trsvcid": "$NVMF_PORT", 00:19:49.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.990 "hdgst": ${hdgst:-false}, 00:19:49.990 "ddgst": ${ddgst:-false} 00:19:49.990 }, 00:19:49.990 "method": "bdev_nvme_attach_controller" 00:19:49.990 } 00:19:49.990 EOF 00:19:49.990 )") 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:49.990 { 00:19:49.990 "params": { 00:19:49.990 "name": "Nvme$subsystem", 00:19:49.990 "trtype": "$TEST_TRANSPORT", 00:19:49.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.990 "adrfam": "ipv4", 00:19:49.990 "trsvcid": "$NVMF_PORT", 00:19:49.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.990 "hdgst": ${hdgst:-false}, 00:19:49.990 "ddgst": ${ddgst:-false} 00:19:49.990 }, 00:19:49.990 "method": "bdev_nvme_attach_controller" 00:19:49.990 } 00:19:49.990 EOF 00:19:49.990 )") 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:49.990 12:33:55 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:49.990 { 00:19:49.990 "params": { 00:19:49.990 "name": "Nvme$subsystem", 00:19:49.990 "trtype": "$TEST_TRANSPORT", 00:19:49.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.990 "adrfam": "ipv4", 00:19:49.990 "trsvcid": "$NVMF_PORT", 00:19:49.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.990 "hdgst": ${hdgst:-false}, 00:19:49.990 "ddgst": ${ddgst:-false} 00:19:49.990 }, 00:19:49.990 "method": "bdev_nvme_attach_controller" 00:19:49.990 } 00:19:49.990 EOF 00:19:49.990 )") 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:49.990 { 00:19:49.990 "params": { 00:19:49.990 "name": "Nvme$subsystem", 00:19:49.990 "trtype": "$TEST_TRANSPORT", 00:19:49.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.990 "adrfam": "ipv4", 00:19:49.990 "trsvcid": "$NVMF_PORT", 00:19:49.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.990 "hdgst": ${hdgst:-false}, 00:19:49.990 "ddgst": ${ddgst:-false} 00:19:49.990 }, 00:19:49.990 "method": "bdev_nvme_attach_controller" 00:19:49.990 } 00:19:49.990 EOF 00:19:49.990 )") 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:49.990 { 00:19:49.990 "params": { 00:19:49.990 "name": "Nvme$subsystem", 00:19:49.990 "trtype": "$TEST_TRANSPORT", 00:19:49.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.990 "adrfam": "ipv4", 00:19:49.990 "trsvcid": "$NVMF_PORT", 00:19:49.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.990 "hdgst": ${hdgst:-false}, 00:19:49.990 "ddgst": ${ddgst:-false} 00:19:49.990 }, 00:19:49.990 "method": "bdev_nvme_attach_controller" 00:19:49.990 } 00:19:49.990 EOF 00:19:49.990 )") 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
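gen_nvmf_target_json, traced above, builds one bdev_nvme_attach_controller stanza per subsystem id, comma-joins the fragments via IFS, and validates the result with jq. The same pattern, distilled into a standalone sketch (the outer wrapper JSON that the real helper emits around the config array is not fully visible in this log; a bare array is used here instead):

    gen_config() {
        local frags=() n
        for n in "$@"; do
            frags+=("{\"params\":{\"name\":\"Nvme$n\",\"trtype\":\"rdma\",\"traddr\":\"192.168.100.8\",\"adrfam\":\"ipv4\",\"trsvcid\":\"4420\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$n\",\"hostnqn\":\"nqn.2016-06.io.spdk:host$n\",\"hdgst\":false,\"ddgst\":false},\"method\":\"bdev_nvme_attach_controller\"}")
        done
        local IFS=,                        # join the array elements with commas
        printf '[%s]' "${frags[*]}" | jq . # jq both validates and pretty-prints
    }

The generated document is handed to bdevperf through process substitution, --json /dev/fd/63, as shown in the command line at shutdown.sh@103, so no config file ever touches disk.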
00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:19:49.990 12:33:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:49.990 "params": { 00:19:49.990 "name": "Nvme1", 00:19:49.990 "trtype": "rdma", 00:19:49.990 "traddr": "192.168.100.8", 00:19:49.990 "adrfam": "ipv4", 00:19:49.990 "trsvcid": "4420", 00:19:49.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:49.990 "hdgst": false, 00:19:49.990 "ddgst": false 00:19:49.990 }, 00:19:49.990 "method": "bdev_nvme_attach_controller" 00:19:49.990 },{ 00:19:49.990 "params": { 00:19:49.991 "name": "Nvme2", 00:19:49.991 "trtype": "rdma", 00:19:49.991 "traddr": "192.168.100.8", 00:19:49.991 "adrfam": "ipv4", 00:19:49.991 "trsvcid": "4420", 00:19:49.991 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:49.991 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:49.991 "hdgst": false, 00:19:49.991 "ddgst": false 00:19:49.991 }, 00:19:49.991 "method": "bdev_nvme_attach_controller" 00:19:49.991 },{ 00:19:49.991 "params": { 00:19:49.991 "name": "Nvme3", 00:19:49.991 "trtype": "rdma", 00:19:49.991 "traddr": "192.168.100.8", 00:19:49.991 "adrfam": "ipv4", 00:19:49.991 "trsvcid": "4420", 00:19:49.991 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:49.991 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:49.991 "hdgst": false, 00:19:49.991 "ddgst": false 00:19:49.991 }, 00:19:49.991 "method": "bdev_nvme_attach_controller" 00:19:49.991 },{ 00:19:49.991 "params": { 00:19:49.991 "name": "Nvme4", 00:19:49.991 "trtype": "rdma", 00:19:49.991 "traddr": "192.168.100.8", 00:19:49.991 "adrfam": "ipv4", 00:19:49.991 "trsvcid": "4420", 00:19:49.991 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:49.991 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:49.991 "hdgst": false, 00:19:49.991 "ddgst": false 00:19:49.991 }, 00:19:49.991 "method": "bdev_nvme_attach_controller" 00:19:49.991 },{ 00:19:49.991 "params": { 00:19:49.991 "name": "Nvme5", 00:19:49.991 "trtype": "rdma", 00:19:49.991 "traddr": "192.168.100.8", 00:19:49.991 "adrfam": "ipv4", 00:19:49.991 "trsvcid": "4420", 00:19:49.991 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:49.991 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:49.991 "hdgst": false, 00:19:49.991 "ddgst": false 00:19:49.991 }, 00:19:49.991 "method": "bdev_nvme_attach_controller" 00:19:49.991 },{ 00:19:49.991 "params": { 00:19:49.991 "name": "Nvme6", 00:19:49.991 "trtype": "rdma", 00:19:49.991 "traddr": "192.168.100.8", 00:19:49.991 "adrfam": "ipv4", 00:19:49.991 "trsvcid": "4420", 00:19:49.991 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:49.991 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:49.991 "hdgst": false, 00:19:49.991 "ddgst": false 00:19:49.991 }, 00:19:49.991 "method": "bdev_nvme_attach_controller" 00:19:49.991 },{ 00:19:49.991 "params": { 00:19:49.991 "name": "Nvme7", 00:19:49.991 "trtype": "rdma", 00:19:49.991 "traddr": "192.168.100.8", 00:19:49.991 "adrfam": "ipv4", 00:19:49.991 "trsvcid": "4420", 00:19:49.991 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:49.991 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:49.991 "hdgst": false, 00:19:49.991 "ddgst": false 00:19:49.991 }, 00:19:49.991 "method": "bdev_nvme_attach_controller" 00:19:49.991 },{ 00:19:49.991 "params": { 00:19:49.991 "name": "Nvme8", 00:19:49.991 "trtype": "rdma", 00:19:49.991 "traddr": "192.168.100.8", 00:19:49.991 "adrfam": "ipv4", 00:19:49.991 "trsvcid": "4420", 00:19:49.991 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:19:49.991 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:49.991 "hdgst": false, 00:19:49.991 "ddgst": false 00:19:49.991 }, 00:19:49.991 "method": "bdev_nvme_attach_controller" 00:19:49.991 },{ 00:19:49.991 "params": { 00:19:49.991 "name": "Nvme9", 00:19:49.991 "trtype": "rdma", 00:19:49.991 "traddr": "192.168.100.8", 00:19:49.991 "adrfam": "ipv4", 00:19:49.991 "trsvcid": "4420", 00:19:49.991 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:49.991 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:49.991 "hdgst": false, 00:19:49.991 "ddgst": false 00:19:49.991 }, 00:19:49.991 "method": "bdev_nvme_attach_controller" 00:19:49.991 },{ 00:19:49.991 "params": { 00:19:49.991 "name": "Nvme10", 00:19:49.991 "trtype": "rdma", 00:19:49.991 "traddr": "192.168.100.8", 00:19:49.991 "adrfam": "ipv4", 00:19:49.991 "trsvcid": "4420", 00:19:49.991 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:49.991 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:49.991 "hdgst": false, 00:19:49.991 "ddgst": false 00:19:49.991 }, 00:19:49.991 "method": "bdev_nvme_attach_controller" 00:19:49.991 }' 00:19:49.991 [2024-11-20 12:33:55.723804] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:19:49.991 [2024-11-20 12:33:55.723891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2796921 ] 00:19:50.249 [2024-11-20 12:33:55.797909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.249 [2024-11-20 12:33:55.860672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.186 12:33:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.186 12:33:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:19:51.186 12:33:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:51.186 12:33:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.186 12:33:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:51.186 Running I/O for 10 seconds... 
00:19:51.446 12:33:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.446 12:33:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:51.446 12:33:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:51.446 12:33:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:19:51.446 12:33:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:19:51.446 12:33:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:19:51.446 12:33:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:19:51.446 12:33:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:51.446 12:33:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:51.446 12:33:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:51.446 12:33:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.446 12:33:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:51.446 12:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.446 12:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:19:51.447 12:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:19:51.447 12:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:19:51.707 12:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:19:51.707 12:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:51.707 12:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:51.707 12:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:51.707 12:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.707 12:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:51.968 12:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.968 12:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=99 00:19:51.968 12:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 99 -ge 100 ']' 00:19:51.968 12:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:19:52.228 12:33:57 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- ))
00:19:52.228 12:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:19:52.228 12:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:19:52.228 12:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:19:52.228 12:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:52.228 12:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:19:52.488 1977.00 IOPS, 123.56 MiB/s [2024-11-20T11:33:58.254Z] 12:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:52.488 12:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=227
00:19:52.488 12:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 227 -ge 100 ']'
00:19:52.488 12:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0
00:19:52.488 12:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break
00:19:52.488 12:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0
00:19:52.488 12:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2796921
00:19:52.488 12:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2796921 ']'
00:19:52.488 12:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2796921
00:19:52.488 12:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname
00:19:52.488 12:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:52.488 12:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2796921
00:19:52.488 12:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:52.488 12:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:52.488 12:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2796921'
00:19:52.488 killing process with pid 2796921
00:19:52.488 12:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2796921
00:19:52.488 12:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2796921
00:19:52.749 Received shutdown signal, test time was about 1.493648 seconds
00:19:52.749
00:19:52.749 Latency(us)
00:19:52.749 [2024-11-20T11:33:58.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:52.749 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:52.749 Verification LBA range: start 0x0 length 0x400
00:19:52.749 Nvme1n1 : 1.47 239.13 14.95 0.00 0.00 263463.82 13786.83 307582.29
00:19:52.749 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:52.749 Verification LBA range: start 0x0 length 0x400
00:19:52.749 Nvme2n1 : 1.47 238.83 14.93 0.00 0.00 259707.36 13981.01 298261.62
00:19:52.749 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:52.749 Verification LBA range: start 0x0 length 0x400
00:19:52.749 Nvme3n1 : 1.48 260.17 16.26 0.00 0.00 234891.25 7961.41 211268.65
00:19:52.749 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:52.749 Verification LBA range: start 0x0 length 0x400
00:19:52.749 Nvme4n1 : 1.48 262.56 16.41 0.00 0.00 228713.01 5097.24 200394.52
00:19:52.749 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:52.749 Verification LBA range: start 0x0 length 0x400
00:19:52.749 Nvme5n1 : 1.48 259.41 16.21 0.00 0.00 228713.81 15728.64 181753.17
00:19:52.749 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:52.749 Verification LBA range: start 0x0 length 0x400
00:19:52.749 Nvme6n1 : 1.48 258.97 16.19 0.00 0.00 225259.77 16893.72 163111.82
00:19:52.749 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:52.750 Verification LBA range: start 0x0 length 0x400
00:19:52.750 Nvme7n1 : 1.49 258.58 16.16 0.00 0.00 221444.17 17767.54 146800.64
00:19:52.750 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:52.750 Verification LBA range: start 0x0 length 0x400
00:19:52.750 Nvme8n1 : 1.49 258.25 16.14 0.00 0.00 217291.92 18252.99 153014.42
00:19:52.750 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:52.750 Verification LBA range: start 0x0 length 0x400
00:19:52.750 Nvme9n1 : 1.49 257.77 16.11 0.00 0.00 215335.82 19418.07 166218.71
00:19:52.750 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:52.750 Verification LBA range: start 0x0 length 0x400
00:19:52.750 Nvme10n1 : 1.49 214.41 13.40 0.00 0.00 254102.34 14757.74 313796.08
00:19:52.750 [2024-11-20T11:33:58.516Z] ===================================================================================================================
00:19:52.750 [2024-11-20T11:33:58.516Z] Total : 2508.08 156.75 0.00 0.00 234095.09 5097.24 313796.08
00:19:53.010 12:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2796781
00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup
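The bdevperf kill above and the nvmf_tgt kill that follows both go through the same killprocess helper (autotest_common.sh@954-978): verify the pid, peek at the process name, then kill and reap. A sketch of the direct-launch case traced here:

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 1   # not running, nothing to do
        process_name=$(ps --no-headers -o comm= "$pid")
        # The real helper special-cases process_name == sudo; this sketch
        # assumes the target was launched directly, as it is in this run.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap and propagate exit status
    }

In these traces the name check resolves to an SPDK reactor thread (reactor_0 for bdevperf, reactor_1 for nvmf_tgt), so the plain kill path is taken each time.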
00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:53.953 rmmod nvme_rdma 00:19:53.953 rmmod nvme_fabrics 00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2796781 ']' 00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2796781 00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2796781 ']' 00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2796781 00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2796781 00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2796781' 00:19:53.953 killing process with pid 2796781 00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2796781 00:19:53.953 12:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2796781 00:19:54.524 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:54.524 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:54.524 00:19:54.524 real 0m5.796s 00:19:54.524 user 0m23.883s 00:19:54.524 sys 0m1.033s 00:19:54.524 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.524 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:54.524 
************************************ 00:19:54.524 END TEST nvmf_shutdown_tc2 00:19:54.524 ************************************ 00:19:54.524 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:54.524 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:54.524 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:54.525 ************************************ 00:19:54.525 START TEST nvmf_shutdown_tc3 00:19:54.525 ************************************ 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:54.525 12:34:00 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 
== mlx5 ]] 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:19:54.525 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:19:54.525 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:54.525 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.525 12:34:00 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:19:54.525 Found net devices under 0000:83:00.0: mlx_0_0 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:19:54.526 Found net devices under 0000:83:00.1: mlx_0_1 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:54.526 12:34:00 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:54.526 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:54.788 12:34:00 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:54.788 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:54.788 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:19:54.788 altname enp131s0f0np0 00:19:54.788 inet 192.168.100.8/24 scope global mlx_0_0 00:19:54.788 valid_lft forever preferred_lft forever 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:54.788 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:54.788 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:19:54.788 altname enp131s0f1np1 00:19:54.788 inet 192.168.100.9/24 scope global mlx_0_1 00:19:54.788 valid_lft forever preferred_lft forever 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:54.788 192.168.100.9' 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:54.788 192.168.100.9' 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # 
head -n 1 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:54.788 192.168.100.9' 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2797425 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2797425 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2797425 ']' 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.788 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.788 [2024-11-20 12:34:00.452536] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:19:54.788 [2024-11-20 12:34:00.452634] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.788 [2024-11-20 12:34:00.525475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:55.049 [2024-11-20 12:34:00.590063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.049 [2024-11-20 12:34:00.590122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.049 [2024-11-20 12:34:00.590138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.049 [2024-11-20 12:34:00.590151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.049 [2024-11-20 12:34:00.590162] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:55.049 [2024-11-20 12:34:00.591510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.049 [2024-11-20 12:34:00.591598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:55.049 [2024-11-20 12:34:00.591687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:55.049 [2024-11-20 12:34:00.591693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.049 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.049 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:19:55.049 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:55.049 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:55.049 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:55.049 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.049 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:55.049 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.049 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:55.049 [2024-11-20 12:34:00.790222] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12580d0/0x125c5c0) succeed. 00:19:55.049 [2024-11-20 12:34:00.805545] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1259760/0x129dc60) succeed. 
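[Annotation] Most of the preamble above is nvmf/common.sh discovering the two mlx5 ports and reading their IPv4 addresses. The helper traced at nvmf/common.sh@116-117 reduces to this one-liner, reconstructed from the xtrace:

# Print the primary IPv4 address of a netdev, e.g. mlx_0_0 -> 192.168.100.8
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

Applied to both ports it produces the RDMA_IP_LIST '192.168.100.8 192.168.100.9', from which head -n 1 and tail -n +2 pick NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP, exactly as traced above.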
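[Annotation] With addressing resolved, nvmfappstart launches the target and shutdown.sh@21 creates the RDMA transport. The equivalent standalone commands, as a sketch assuming the workspace paths from this log (waitforlisten is the harness helper that blocks until the RPC socket is up):

# Start nvmf_tgt on cores 1-4 (mask 0x1E) with all tracepoint groups enabled,
# then create the RDMA transport with the parameters traced above.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
waitforlisten $nvmfpid
rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192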
00:19:55.310 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.310 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:19:55.310 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:19:55.310 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:55.310 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:55.310 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:55.310 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:55.310 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:55.310 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:55.310 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:55.310 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:55.310 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:55.310 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:55.310 12:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:55.310 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:55.310 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:55.310 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:55.310 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:55.310 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:55.310 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:55.310 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:55.310 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:55.310 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:55.310 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:55.310 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:55.311 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:55.311 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:19:55.311 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.311 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:55.311 Malloc1 00:19:55.311 [2024-11-20 12:34:01.072708] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:55.571 Malloc2 00:19:55.571 Malloc3 00:19:55.571 Malloc4 00:19:55.571 Malloc5 00:19:55.571 Malloc6 00:19:55.831 Malloc7 00:19:55.831 Malloc8 00:19:55.831 Malloc9 00:19:55.831 Malloc10 00:19:55.831 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.831 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:19:55.831 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:55.831 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:55.831 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2797575 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2797575 /var/tmp/bdevperf.sock 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2797575 ']' 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
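[Annotation] The --json /dev/fd/63 argument in the bdevperf command line above is bash process substitution: the config is generated on the fly by gen_nvmf_target_json, whose per-subsystem template is traced next. In effect the launch is (a sketch, not the script verbatim):

# 64-deep, 64 KiB verify workload for 10 s across ten attached controllers;
# {1..10} expands to the arguments 1 2 ... 10 seen in the trace.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 10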
00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:55.832 { 00:19:55.832 "params": { 00:19:55.832 "name": "Nvme$subsystem", 00:19:55.832 "trtype": "$TEST_TRANSPORT", 00:19:55.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.832 "adrfam": "ipv4", 00:19:55.832 "trsvcid": "$NVMF_PORT", 00:19:55.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.832 "hdgst": ${hdgst:-false}, 00:19:55.832 "ddgst": ${ddgst:-false} 00:19:55.832 }, 00:19:55.832 "method": "bdev_nvme_attach_controller" 00:19:55.832 } 00:19:55.832 EOF 00:19:55.832 )") 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:55.832 { 00:19:55.832 "params": { 00:19:55.832 "name": "Nvme$subsystem", 00:19:55.832 "trtype": "$TEST_TRANSPORT", 00:19:55.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.832 "adrfam": "ipv4", 00:19:55.832 "trsvcid": "$NVMF_PORT", 00:19:55.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.832 "hdgst": ${hdgst:-false}, 00:19:55.832 "ddgst": ${ddgst:-false} 00:19:55.832 }, 00:19:55.832 "method": "bdev_nvme_attach_controller" 00:19:55.832 } 00:19:55.832 EOF 00:19:55.832 )") 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:55.832 { 00:19:55.832 "params": { 00:19:55.832 "name": "Nvme$subsystem", 00:19:55.832 "trtype": "$TEST_TRANSPORT", 00:19:55.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.832 "adrfam": "ipv4", 00:19:55.832 "trsvcid": "$NVMF_PORT", 00:19:55.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.832 "hdgst": ${hdgst:-false}, 00:19:55.832 "ddgst": ${ddgst:-false} 00:19:55.832 }, 00:19:55.832 "method": "bdev_nvme_attach_controller" 00:19:55.832 } 00:19:55.832 EOF 00:19:55.832 )") 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:55.832 { 00:19:55.832 "params": { 00:19:55.832 "name": "Nvme$subsystem", 00:19:55.832 "trtype": "$TEST_TRANSPORT", 00:19:55.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.832 "adrfam": "ipv4", 00:19:55.832 "trsvcid": "$NVMF_PORT", 00:19:55.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.832 "hdgst": ${hdgst:-false}, 00:19:55.832 "ddgst": ${ddgst:-false} 00:19:55.832 }, 00:19:55.832 "method": "bdev_nvme_attach_controller" 00:19:55.832 } 00:19:55.832 EOF 00:19:55.832 )") 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:55.832 { 00:19:55.832 "params": { 00:19:55.832 "name": "Nvme$subsystem", 00:19:55.832 "trtype": "$TEST_TRANSPORT", 00:19:55.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.832 "adrfam": "ipv4", 00:19:55.832 "trsvcid": "$NVMF_PORT", 00:19:55.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.832 "hdgst": ${hdgst:-false}, 00:19:55.832 "ddgst": ${ddgst:-false} 00:19:55.832 }, 00:19:55.832 "method": "bdev_nvme_attach_controller" 00:19:55.832 } 00:19:55.832 EOF 00:19:55.832 )") 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:55.832 { 00:19:55.832 "params": { 00:19:55.832 "name": "Nvme$subsystem", 00:19:55.832 "trtype": "$TEST_TRANSPORT", 00:19:55.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.832 "adrfam": "ipv4", 00:19:55.832 "trsvcid": "$NVMF_PORT", 00:19:55.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.832 "hdgst": ${hdgst:-false}, 00:19:55.832 "ddgst": ${ddgst:-false} 00:19:55.832 }, 00:19:55.832 "method": "bdev_nvme_attach_controller" 00:19:55.832 } 00:19:55.832 EOF 00:19:55.832 )") 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:55.832 { 00:19:55.832 "params": { 00:19:55.832 "name": "Nvme$subsystem", 00:19:55.832 "trtype": "$TEST_TRANSPORT", 00:19:55.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.832 "adrfam": "ipv4", 00:19:55.832 "trsvcid": "$NVMF_PORT", 00:19:55.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.832 "hdgst": ${hdgst:-false}, 00:19:55.832 "ddgst": ${ddgst:-false} 00:19:55.832 }, 00:19:55.832 "method": "bdev_nvme_attach_controller" 00:19:55.832 } 00:19:55.832 EOF 00:19:55.832 )") 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:55.832 12:34:01 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:55.832 { 00:19:55.832 "params": { 00:19:55.832 "name": "Nvme$subsystem", 00:19:55.832 "trtype": "$TEST_TRANSPORT", 00:19:55.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.832 "adrfam": "ipv4", 00:19:55.832 "trsvcid": "$NVMF_PORT", 00:19:55.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.832 "hdgst": ${hdgst:-false}, 00:19:55.832 "ddgst": ${ddgst:-false} 00:19:55.832 }, 00:19:55.832 "method": "bdev_nvme_attach_controller" 00:19:55.832 } 00:19:55.832 EOF 00:19:55.832 )") 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:55.832 { 00:19:55.832 "params": { 00:19:55.832 "name": "Nvme$subsystem", 00:19:55.832 "trtype": "$TEST_TRANSPORT", 00:19:55.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.832 "adrfam": "ipv4", 00:19:55.832 "trsvcid": "$NVMF_PORT", 00:19:55.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.832 "hdgst": ${hdgst:-false}, 00:19:55.832 "ddgst": ${ddgst:-false} 00:19:55.832 }, 00:19:55.832 "method": "bdev_nvme_attach_controller" 00:19:55.832 } 00:19:55.832 EOF 00:19:55.832 )") 00:19:55.832 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:55.833 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:55.833 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:55.833 { 00:19:55.833 "params": { 00:19:55.833 "name": "Nvme$subsystem", 00:19:55.833 "trtype": "$TEST_TRANSPORT", 00:19:55.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.833 "adrfam": "ipv4", 00:19:55.833 "trsvcid": "$NVMF_PORT", 00:19:55.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.833 "hdgst": ${hdgst:-false}, 00:19:55.833 "ddgst": ${ddgst:-false} 00:19:55.833 }, 00:19:55.833 "method": "bdev_nvme_attach_controller" 00:19:55.833 } 00:19:55.833 EOF 00:19:55.833 )") 00:19:55.833 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:55.833 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
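[Annotation] Each pass through the loop above appends one attach-controller fragment; jq . then validates the joined result, pretty-printed below. A condensed reconstruction of the generator from the xtrace (nvmf/common.sh@560-586); the exact wrapping of the fragment list is an assumption:

gen_nvmf_target_json() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    # Comma-join the fragments; the real helper embeds this list in a full
    # bdev-subsystem config before piping it through jq . (output below).
    printf '%s\n' "${config[*]}"
}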
00:19:55.833 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:19:55.833 12:34:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:55.833 "params": { 00:19:55.833 "name": "Nvme1", 00:19:55.833 "trtype": "rdma", 00:19:55.833 "traddr": "192.168.100.8", 00:19:55.833 "adrfam": "ipv4", 00:19:55.833 "trsvcid": "4420", 00:19:55.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.833 "hdgst": false, 00:19:55.833 "ddgst": false 00:19:55.833 }, 00:19:55.833 "method": "bdev_nvme_attach_controller" 00:19:55.833 },{ 00:19:55.833 "params": { 00:19:55.833 "name": "Nvme2", 00:19:55.833 "trtype": "rdma", 00:19:55.833 "traddr": "192.168.100.8", 00:19:55.833 "adrfam": "ipv4", 00:19:55.833 "trsvcid": "4420", 00:19:55.833 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:55.833 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:55.833 "hdgst": false, 00:19:55.833 "ddgst": false 00:19:55.833 }, 00:19:55.833 "method": "bdev_nvme_attach_controller" 00:19:55.833 },{ 00:19:55.833 "params": { 00:19:55.833 "name": "Nvme3", 00:19:55.833 "trtype": "rdma", 00:19:55.833 "traddr": "192.168.100.8", 00:19:55.833 "adrfam": "ipv4", 00:19:55.833 "trsvcid": "4420", 00:19:55.833 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:55.833 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:55.833 "hdgst": false, 00:19:55.833 "ddgst": false 00:19:55.833 }, 00:19:55.833 "method": "bdev_nvme_attach_controller" 00:19:55.833 },{ 00:19:55.833 "params": { 00:19:55.833 "name": "Nvme4", 00:19:55.833 "trtype": "rdma", 00:19:55.833 "traddr": "192.168.100.8", 00:19:55.833 "adrfam": "ipv4", 00:19:55.833 "trsvcid": "4420", 00:19:55.833 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:55.833 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:55.833 "hdgst": false, 00:19:55.833 "ddgst": false 00:19:55.833 }, 00:19:55.833 "method": "bdev_nvme_attach_controller" 00:19:55.833 },{ 00:19:55.833 "params": { 00:19:55.833 "name": "Nvme5", 00:19:55.833 "trtype": "rdma", 00:19:55.833 "traddr": "192.168.100.8", 00:19:55.833 "adrfam": "ipv4", 00:19:55.833 "trsvcid": "4420", 00:19:55.833 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:55.833 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:55.833 "hdgst": false, 00:19:55.833 "ddgst": false 00:19:55.833 }, 00:19:55.833 "method": "bdev_nvme_attach_controller" 00:19:55.833 },{ 00:19:55.833 "params": { 00:19:55.833 "name": "Nvme6", 00:19:55.833 "trtype": "rdma", 00:19:55.833 "traddr": "192.168.100.8", 00:19:55.833 "adrfam": "ipv4", 00:19:55.833 "trsvcid": "4420", 00:19:55.833 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:55.833 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:55.833 "hdgst": false, 00:19:55.833 "ddgst": false 00:19:55.833 }, 00:19:55.833 "method": "bdev_nvme_attach_controller" 00:19:55.833 },{ 00:19:55.833 "params": { 00:19:55.833 "name": "Nvme7", 00:19:55.833 "trtype": "rdma", 00:19:55.833 "traddr": "192.168.100.8", 00:19:55.833 "adrfam": "ipv4", 00:19:55.833 "trsvcid": "4420", 00:19:55.833 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:55.833 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:55.833 "hdgst": false, 00:19:55.833 "ddgst": false 00:19:55.833 }, 00:19:55.833 "method": "bdev_nvme_attach_controller" 00:19:55.833 },{ 00:19:55.833 "params": { 00:19:55.833 "name": "Nvme8", 00:19:55.833 "trtype": "rdma", 00:19:55.833 "traddr": "192.168.100.8", 00:19:55.833 "adrfam": "ipv4", 00:19:55.833 "trsvcid": "4420", 00:19:55.833 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:19:55.833 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:55.833 "hdgst": false, 00:19:55.833 "ddgst": false 00:19:55.833 }, 00:19:55.833 "method": "bdev_nvme_attach_controller" 00:19:55.833 },{ 00:19:55.833 "params": { 00:19:55.833 "name": "Nvme9", 00:19:55.833 "trtype": "rdma", 00:19:55.833 "traddr": "192.168.100.8", 00:19:55.833 "adrfam": "ipv4", 00:19:55.833 "trsvcid": "4420", 00:19:55.833 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:55.833 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:55.833 "hdgst": false, 00:19:55.833 "ddgst": false 00:19:55.833 }, 00:19:55.833 "method": "bdev_nvme_attach_controller" 00:19:55.833 },{ 00:19:55.833 "params": { 00:19:55.833 "name": "Nvme10", 00:19:55.833 "trtype": "rdma", 00:19:55.833 "traddr": "192.168.100.8", 00:19:55.833 "adrfam": "ipv4", 00:19:55.833 "trsvcid": "4420", 00:19:55.833 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:55.833 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:55.833 "hdgst": false, 00:19:55.833 "ddgst": false 00:19:55.833 }, 00:19:55.833 "method": "bdev_nvme_attach_controller" 00:19:55.833 }' 00:19:55.833 [2024-11-20 12:34:01.580406] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:19:55.833 [2024-11-20 12:34:01.580512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2797575 ] 00:19:56.093 [2024-11-20 12:34:01.654911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.093 [2024-11-20 12:34:01.717821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.033 Running I/O for 10 seconds... 00:19:57.033 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.033 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:19:57.033 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:57.033 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.033 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:57.294 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.295 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:57.295 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:57.295 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:57.295 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:19:57.295 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:19:57.295 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:19:57.295 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # (( i = 10 )) 00:19:57.295 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:57.295 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:57.295 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:57.295 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.295 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:57.295 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.295 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:19:57.295 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:19:57.295 12:34:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:19:57.554 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:19:57.554 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:57.554 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:57.554 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:57.554 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.554 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:57.814 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.814 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=91 00:19:57.814 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 91 -ge 100 ']' 00:19:57.814 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:19:58.074 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:19:58.074 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:58.074 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:58.074 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:58.074 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.074 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:58.336 2413.00 IOPS, 150.81 MiB/s [2024-11-20T11:34:04.102Z] 12:34:03 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.336 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=219 00:19:58.336 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 219 -ge 100 ']' 00:19:58.336 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:19:58.336 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:19:58.336 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:19:58.336 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2797425 00:19:58.336 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2797425 ']' 00:19:58.336 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2797425 00:19:58.336 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:19:58.336 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.336 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2797425 00:19:58.336 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:58.336 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:58.336 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2797425' 00:19:58.336 killing process with pid 2797425 00:19:58.336 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2797425 00:19:58.336 12:34:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2797425 00:19:58.907 12:34:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:19:59.436 1591.50 IOPS, 99.47 MiB/s [2024-11-20T11:34:05.202Z] [2024-11-20 12:34:05.056356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.436 [2024-11-20 12:34:05.056420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:8250 p:0 m:0 dnr:0 00:19:59.436 [2024-11-20 12:34:05.056441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.436 [2024-11-20 12:34:05.056457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:8250 p:0 m:0 dnr:0 00:19:59.436 [2024-11-20 12:34:05.056472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.436 [2024-11-20 12:34:05.056496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:8250 p:0 m:0 dnr:0 00:19:59.436 
[2024-11-20 12:34:05.056513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.436 [2024-11-20 12:34:05.056528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:8250 p:0 m:0 dnr:0 00:19:59.436 [2024-11-20 12:34:05.059099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:59.436 [2024-11-20 12:34:05.059136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:19:59.436 [2024-11-20 12:34:05.059172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.436 [2024-11-20 12:34:05.059192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.436 [2024-11-20 12:34:05.059209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.436 [2024-11-20 12:34:05.059225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.436 [2024-11-20 12:34:05.059241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.436 [2024-11-20 12:34:05.059255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.436 [2024-11-20 12:34:05.059271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.436 [2024-11-20 12:34:05.059294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.436 [2024-11-20 12:34:05.062028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:59.436 [2024-11-20 12:34:05.062063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:19:59.436 [2024-11-20 12:34:05.062095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.436 [2024-11-20 12:34:05.062115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.436 [2024-11-20 12:34:05.062133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.436 [2024-11-20 12:34:05.062148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.436 [2024-11-20 12:34:05.062163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.436 [2024-11-20 12:34:05.062177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.436 [2024-11-20 12:34:05.062193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.436 [2024-11-20 12:34:05.062207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.436 [2024-11-20 12:34:05.064412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:59.436 [2024-11-20 12:34:05.064447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:19:59.436 [2024-11-20 12:34:05.064488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.436 [2024-11-20 12:34:05.064510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.436 [2024-11-20 12:34:05.064527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.436 [2024-11-20 12:34:05.064542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.436 [2024-11-20 12:34:05.064557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.436 [2024-11-20 12:34:05.064572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.436 [2024-11-20 12:34:05.064588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.436 [2024-11-20 12:34:05.064602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.436 [2024-11-20 12:34:05.066975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:59.436 [2024-11-20 12:34:05.067010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:19:59.436 [2024-11-20 12:34:05.067045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.067066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.067089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.067105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.067121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.067137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.067152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.067167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.069633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:59.437 [2024-11-20 12:34:05.069660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:19:59.437 [2024-11-20 12:34:05.069688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.069706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.069722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.069737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.069753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.069767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.069782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.069796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.072113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:59.437 [2024-11-20 12:34:05.072147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:19:59.437 [2024-11-20 12:34:05.072176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.072195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.072211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.072226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.072242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.072256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.072271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.072294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.074208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:59.437 [2024-11-20 12:34:05.074233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:19:59.437 [2024-11-20 12:34:05.074263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.074283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.074299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.074314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.074329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.074344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.074359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.074374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.076257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:59.437 [2024-11-20 12:34:05.076283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:19:59.437 [2024-11-20 12:34:05.076309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.076328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.076344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.076359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.076374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.076389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.076404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.076418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.078782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:59.437 [2024-11-20 12:34:05.078808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:19:59.437 [2024-11-20 12:34:05.078837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.078857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.078873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.078893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.078910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.078924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.078940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.437 [2024-11-20 12:34:05.078954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.081847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:59.437 [2024-11-20 12:34:05.081880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
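[Editor's note] The records above show the shutdown taking effect: each of the ten admin qpairs (cnode1 through cnode10) hits CQ transport error -6 ("No such device or address", i.e. ENXIO) and its controller is marked failed; the failover and aborted-command notices that follow are the expected teardown path. For readability, here is one entry of the bdevperf configuration printed near the top of this test, with the log timestamps stripped — a reconstruction, and the ten entries differ only in the numbered name/subnqn/hostnqn fields (Nvme1..Nvme10, cnode1..cnode10, host1..host10):

  {
    "params": {
      "name": "Nvme1",
      "trtype": "rdma",
      "traddr": "192.168.100.8",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

Before killing the target (pid 2797425 above), the test waits until I/O is actually flowing through bdevperf. Below is a minimal bash reconstruction of the waitforio helper whose xtrace appears above (target/shutdown.sh@51-70); it is a sketch assembled from the trace, not the verbatim script, and the argument handling is an assumption — the real helper lives in SPDK's test/nvmf/target/shutdown.sh:

  # Poll bdev_get_iostat over the bdevperf RPC socket until the bdev
  # has completed enough reads, or give up after 10 polls.
  waitforio() {
      local sock=$1 bdev=$2      # e.g. /var/tmp/bdevperf.sock Nvme1n1
      [ -z "$sock" ] && return 1
      [ -z "$bdev" ] && return 1
      local ret=1
      local i
      for ((i = 10; i != 0; i--)); do
          # Ask the running bdevperf app how many reads have completed.
          read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
              | jq -r '.bdevs[0].num_read_ops')
          # The trace above shows this climbing 3 -> 91 -> 219; the test
          # treats >= 100 completed reads as "I/O is flowing".
          if [ "$read_io_count" -ge 100 ]; then
              ret=0
              break
          fi
          sleep 0.25
      done
      return $ret
  }

In this run the threshold was crossed on the third poll (read_io_count=219), so the script proceeded to kill the nvmf target and the controller-failure records above are the RDMA host side observing that teardown.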
00:19:59.437 [2024-11-20 12:34:05.084771] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:19:59.437 [2024-11-20 12:34:05.087138] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:19:59.437 [2024-11-20 12:34:05.088893] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:19:59.437 [2024-11-20 12:34:05.091010] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:19:59.437 [2024-11-20 12:34:05.091120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001adf780 len:0x10000 key:0x183000 00:19:59.437 [2024-11-20 12:34:05.091146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.091174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001acf700 len:0x10000 key:0x183000 00:19:59.437 [2024-11-20 12:34:05.091192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.091216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001abf680 len:0x10000 key:0x183000 00:19:59.437 [2024-11-20 12:34:05.091234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.091257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001aaf600 len:0x10000 key:0x183000 00:19:59.437 [2024-11-20 12:34:05.091274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.091297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001a9f580 len:0x10000 key:0x183000 00:19:59.437 [2024-11-20 12:34:05.091314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.091337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001a8f500 len:0x10000 key:0x183000 00:19:59.437 [2024-11-20 12:34:05.091354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.091382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001a7f480 len:0x10000 key:0x183000 00:19:59.437 [2024-11-20 12:34:05.091401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.437 [2024-11-20 12:34:05.091424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001a6f400 len:0x10000 key:0x183000 00:19:59.438 [2024-11-20 12:34:05.091441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.091463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001a5f380 len:0x10000 key:0x183000 00:19:59.438 [2024-11-20 12:34:05.091488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.091514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001a4f300 len:0x10000 key:0x183000 00:19:59.438 [2024-11-20 12:34:05.091531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.091554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001a3f280 len:0x10000 key:0x183000 00:19:59.438 [2024-11-20 12:34:05.091571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.091594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001a2f200 len:0x10000 key:0x183000 00:19:59.438 [2024-11-20 12:34:05.091611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.091633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001a1f180 len:0x10000 key:0x183000 00:19:59.438 [2024-11-20 12:34:05.091650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.091673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001a0f100 len:0x10000 key:0x183000 00:19:59.438 [2024-11-20 12:34:05.091690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.091713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001df0000 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.091730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.091753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001ddff80 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.091769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.091792] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001dcff00 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.091809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.091832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001dbfe80 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.091854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.091877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001dafe00 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.091894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.091917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001d9fd80 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.091935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.091958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001d8fd00 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.091975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.091998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001d7fc80 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.092015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.092038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001d6fc00 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.092055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.092078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001d5fb80 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.092095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.092117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001d4fb00 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.092135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.092158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 
nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001d3fa80 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.092175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.092198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001d2fa00 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.092215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.092238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001d1f980 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.092255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.092278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001d0f900 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.092299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.092324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001cff880 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.092341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.092364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001cef800 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.092381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.092405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001cdf780 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.092421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.092444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001ccf700 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.092461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.092542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001cbf680 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.092564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.092589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46336 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x201001caf600 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.092606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.092629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001c9f580 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.092646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.092670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001c8f500 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.092687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.092709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001c7f480 len:0x10000 key:0x183100 00:19:59.438 [2024-11-20 12:34:05.092726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.438 [2024-11-20 12:34:05.092749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001c6f400 len:0x10000 key:0x183100 00:19:59.439 [2024-11-20 12:34:05.092766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.092789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001c5f380 len:0x10000 key:0x183100 00:19:59.439 [2024-11-20 12:34:05.092805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.092833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001c4f300 len:0x10000 key:0x183100 00:19:59.439 [2024-11-20 12:34:05.092851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.092874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001c3f280 len:0x10000 key:0x183100 00:19:59.439 [2024-11-20 12:34:05.092891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.092914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001c2f200 len:0x10000 key:0x183100 00:19:59.439 [2024-11-20 12:34:05.092931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.092954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001c1f180 
len:0x10000 key:0x183100 00:19:59.439 [2024-11-20 12:34:05.092971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.092994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001c0f100 len:0x10000 key:0x183100 00:19:59.439 [2024-11-20 12:34:05.093011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.093034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001ff0000 len:0x10000 key:0x182b00 00:19:59.439 [2024-11-20 12:34:05.093051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.093074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001fdff80 len:0x10000 key:0x182b00 00:19:59.439 [2024-11-20 12:34:05.093092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.093115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001fcff00 len:0x10000 key:0x182b00 00:19:59.439 [2024-11-20 12:34:05.093132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.093155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001fbfe80 len:0x10000 key:0x182b00 00:19:59.439 [2024-11-20 12:34:05.093172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.093195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001fafe00 len:0x10000 key:0x182b00 00:19:59.439 [2024-11-20 12:34:05.093212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.093235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f9fd80 len:0x10000 key:0x182b00 00:19:59.439 [2024-11-20 12:34:05.093252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.093279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f8fd00 len:0x10000 key:0x182b00 00:19:59.439 [2024-11-20 12:34:05.093296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.093319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f7fc80 len:0x10000 key:0x182b00 
00:19:59.439 [2024-11-20 12:34:05.093336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.093359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f6fc00 len:0x10000 key:0x182b00 00:19:59.439 [2024-11-20 12:34:05.093376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.093399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f5fb80 len:0x10000 key:0x182b00 00:19:59.439 [2024-11-20 12:34:05.093416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.093438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001b6fc00 len:0x10000 key:0x183000 00:19:59.439 [2024-11-20 12:34:05.093455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.093487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfbf000 len:0x10000 key:0x183a00 00:19:59.439 [2024-11-20 12:34:05.093507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.093537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf9e000 len:0x10000 key:0x183a00 00:19:59.439 [2024-11-20 12:34:05.093555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.093579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf7d000 len:0x10000 key:0x183a00 00:19:59.439 [2024-11-20 12:34:05.093596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.093620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf5c000 len:0x10000 key:0x183a00 00:19:59.439 [2024-11-20 12:34:05.093638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.093662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf3b000 len:0x10000 key:0x183a00 00:19:59.439 [2024-11-20 12:34:05.093679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.093703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf1a000 len:0x10000 key:0x183a00 00:19:59.439 [2024-11-20 12:34:05.093720] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.093744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bef9000 len:0x10000 key:0x183a00 00:19:59.439 [2024-11-20 12:34:05.093766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.093790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bed8000 len:0x10000 key:0x183a00 00:19:59.439 [2024-11-20 12:34:05.093808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.096914] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:19:59.439 [2024-11-20 12:34:05.096949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001eff880 len:0x10000 key:0x182b00 00:19:59.439 [2024-11-20 12:34:05.096968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.096995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001eef800 len:0x10000 key:0x182b00 00:19:59.439 [2024-11-20 12:34:05.097014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.097037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001edf780 len:0x10000 key:0x182b00 00:19:59.439 [2024-11-20 12:34:05.097054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.097077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001ecf700 len:0x10000 key:0x182b00 00:19:59.439 [2024-11-20 12:34:05.097094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.097117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001ebf680 len:0x10000 key:0x182b00 00:19:59.439 [2024-11-20 12:34:05.097134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.097157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001eaf600 len:0x10000 key:0x182b00 00:19:59.439 [2024-11-20 12:34:05.097174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.439 [2024-11-20 12:34:05.097197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:61 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e9f580 len:0x10000 key:0x182b00
00:19:59.439 [2024-11-20 12:34:05.097214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.439 [2024-11-20 12:34:05.097237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e8f500 len:0x10000 key:0x182b00
00:19:59.439 [2024-11-20 12:34:05.097254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.439 [2024-11-20 12:34:05.097277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e7f480 len:0x10000 key:0x182b00
00:19:59.439 [2024-11-20 12:34:05.097294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.439 [2024-11-20 12:34:05.097322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e6f400 len:0x10000 key:0x182b00
00:19:59.440 [2024-11-20 12:34:05.097340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.097363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e5f380 len:0x10000 key:0x182b00
00:19:59.440 [2024-11-20 12:34:05.097379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.097402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e4f300 len:0x10000 key:0x182b00
00:19:59.440 [2024-11-20 12:34:05.097419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.097442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e3f280 len:0x10000 key:0x182b00
00:19:59.440 [2024-11-20 12:34:05.097458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.097491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e2f200 len:0x10000 key:0x182b00
00:19:59.440 [2024-11-20 12:34:05.097510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.097534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e1f180 len:0x10000 key:0x182b00
00:19:59.440 [2024-11-20 12:34:05.097551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.097578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e0f100 len:0x10000 key:0x182b00
00:19:59.440 [2024-11-20 12:34:05.097595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.097618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010021f0000 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.097634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.097657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010021dff80 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.097674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.097698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010021cff00 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.097714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.097737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010021bfe80 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.097754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.097777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010021afe00 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.097798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.097822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100219fd80 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.097839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.097862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100218fd00 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.097878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.097902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100217fc80 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.097919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.097941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100216fc00 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.097958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.097981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100215fb80 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.097998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.098021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100214fb00 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.098037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.098060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100213fa80 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.098077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.098100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100212fa00 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.098116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.098139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100211f980 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.098156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.098188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100210f900 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.098204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.098227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010020ff880 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.098248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.098272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010020ef800 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.098289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.098312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010020df780 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.098329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.098352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010020cf700 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.098369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.098392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010020bf680 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.098409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.098432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010020af600 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.098449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.098471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100209f580 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.098500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.098524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100208f500 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.098541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.098564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100207f480 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.098581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.098603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100206f400 len:0x10000 key:0x184300
00:19:59.440 [2024-11-20 12:34:05.098620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.440 [2024-11-20 12:34:05.098642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100205f380 len:0x10000 key:0x184300
00:19:59.441 [2024-11-20 12:34:05.098659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.098681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100204f300 len:0x10000 key:0x184300
00:19:59.441 [2024-11-20 12:34:05.098702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.098726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100203f280 len:0x10000 key:0x184300
00:19:59.441 [2024-11-20 12:34:05.098744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.098767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100202f200 len:0x10000 key:0x184300
00:19:59.441 [2024-11-20 12:34:05.098783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.098806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100201f180 len:0x10000 key:0x184300
00:19:59.441 [2024-11-20 12:34:05.098823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.098845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100200f100 len:0x10000 key:0x184300
00:19:59.441 [2024-11-20 12:34:05.098862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.098885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010023f0000 len:0x10000 key:0x184000
00:19:59.441 [2024-11-20 12:34:05.098902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.098924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010023dff80 len:0x10000 key:0x184000
00:19:59.441 [2024-11-20 12:34:05.098941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.098964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010023cff00 len:0x10000 key:0x184000
00:19:59.441 [2024-11-20 12:34:05.098980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.099003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010023bfe80 len:0x10000 key:0x184000
00:19:59.441 [2024-11-20 12:34:05.099020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.099043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010023afe00 len:0x10000 key:0x184000
00:19:59.441 [2024-11-20 12:34:05.099060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.099091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100239fd80 len:0x10000 key:0x184000
00:19:59.441 [2024-11-20 12:34:05.099108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.099131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100238fd00 len:0x10000 key:0x184000
00:19:59.441 [2024-11-20 12:34:05.099148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.099175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100237fc80 len:0x10000 key:0x184000
00:19:59.441 [2024-11-20 12:34:05.099193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.099215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100236fc00 len:0x10000 key:0x184000
00:19:59.441 [2024-11-20 12:34:05.099233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.099256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100235fb80 len:0x10000 key:0x184000
00:19:59.441 [2024-11-20 12:34:05.099273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.099295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100234fb00 len:0x10000 key:0x184000
00:19:59.441 [2024-11-20 12:34:05.099312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.099336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100233fa80 len:0x10000 key:0x184000
00:19:59.441 [2024-11-20 12:34:05.099353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.099376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f4fb00 len:0x10000 key:0x182b00
00:19:59.441 [2024-11-20 12:34:05.099393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.099416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3df000 len:0x10000 key:0x183a00
00:19:59.441 [2024-11-20 12:34:05.099432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.099457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3be000 len:0x10000 key:0x183a00
00:19:59.441 [2024-11-20 12:34:05.099474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.099507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c39d000 len:0x10000 key:0x183a00
00:19:59.441 [2024-11-20 12:34:05.099526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.099549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c37c000 len:0x10000 key:0x183a00
00:19:59.441 [2024-11-20 12:34:05.099567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.102581] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:19:59.441 [2024-11-20 12:34:05.102616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100229f580 len:0x10000 key:0x184000
00:19:59.441 [2024-11-20 12:34:05.102640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.102680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100228f500 len:0x10000 key:0x184000
00:19:59.441 [2024-11-20 12:34:05.102702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.102726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100227f480 len:0x10000 key:0x184000
00:19:59.441 [2024-11-20 12:34:05.102744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.102767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100226f400 len:0x10000 key:0x184000
00:19:59.441 [2024-11-20 12:34:05.102784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.102807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100225f380 len:0x10000 key:0x184000
00:19:59.441 [2024-11-20 12:34:05.102824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.102847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100224f300 len:0x10000 key:0x184000
00:19:59.441 [2024-11-20 12:34:05.102865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.102888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100223f280 len:0x10000 key:0x184000
00:19:59.441 [2024-11-20 12:34:05.102905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.441 [2024-11-20 12:34:05.102928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100222f200 len:0x10000 key:0x184000
00:19:59.442 [2024-11-20 12:34:05.102945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.102968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100221f180 len:0x10000 key:0x184000
00:19:59.442 [2024-11-20 12:34:05.102985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100220f100 len:0x10000 key:0x184000
00:19:59.442 [2024-11-20 12:34:05.103025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025f0000 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025dff80 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025cff00 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025bfe80 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025afe00 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100259fd80 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100258fd00 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100257fc80 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100256fc00 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100255fb80 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100254fb00 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100253fa80 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100252fa00 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100251f980 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100250f900 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010024ff880 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010024ef800 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010024df780 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010024cf700 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010024bf680 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010024af600 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100249f580 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100248f500 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.103968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.103990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100247f480 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.104007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.104031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100246f400 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.104052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.104076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100245f380 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.104093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.104118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100244f300 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.104135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.104158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100243f280 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.104175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.104198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100242f200 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.104214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.104237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100241f180 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.104254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.104281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100240f100 len:0x10000 key:0x184100
00:19:59.442 [2024-11-20 12:34:05.104298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.104321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010027f0000 len:0x10000 key:0x184c00
00:19:59.442 [2024-11-20 12:34:05.104338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.442 [2024-11-20 12:34:05.104361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010027dff80 len:0x10000 key:0x184c00
00:19:59.443 [2024-11-20 12:34:05.104377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.104401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010027cff00 len:0x10000 key:0x184c00
00:19:59.443 [2024-11-20 12:34:05.104418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.104441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010027bfe80 len:0x10000 key:0x184c00
00:19:59.443 [2024-11-20 12:34:05.104458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.104488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010027afe00 len:0x10000 key:0x184c00
00:19:59.443 [2024-11-20 12:34:05.104511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.104544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100279fd80 len:0x10000 key:0x184c00
00:19:59.443 [2024-11-20 12:34:05.104561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.104585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100278fd00 len:0x10000 key:0x184c00
00:19:59.443 [2024-11-20 12:34:05.104602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.104625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100277fc80 len:0x10000 key:0x184c00
00:19:59.443 [2024-11-20 12:34:05.104641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.104667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100276fc00 len:0x10000 key:0x184c00
00:19:59.443 [2024-11-20 12:34:05.104684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.104707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100275fb80 len:0x10000 key:0x184c00
00:19:59.443 [2024-11-20 12:34:05.104723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.104746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100274fb00 len:0x10000 key:0x184c00
00:19:59.443 [2024-11-20 12:34:05.104763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.104786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100273fa80 len:0x10000 key:0x184c00
00:19:59.443 [2024-11-20 12:34:05.104802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.104825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100272fa00 len:0x10000 key:0x184c00
00:19:59.443 [2024-11-20 12:34:05.104842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.104865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100271f980 len:0x10000 key:0x184c00
00:19:59.443 [2024-11-20 12:34:05.104881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.104903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100232fa00 len:0x10000 key:0x184000
00:19:59.443 [2024-11-20 12:34:05.104920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.104942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7ff000 len:0x10000 key:0x183a00
00:19:59.443 [2024-11-20 12:34:05.104959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.104986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7de000 len:0x10000 key:0x183a00
00:19:59.443 [2024-11-20 12:34:05.105005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.105029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7bd000 len:0x10000 key:0x183a00
00:19:59.443 [2024-11-20 12:34:05.105046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.105070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c79c000 len:0x10000 key:0x183a00
00:19:59.443 [2024-11-20 12:34:05.105087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.105111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c77b000 len:0x10000 key:0x183a00
00:19:59.443 [2024-11-20 12:34:05.105128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.105152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c75a000 len:0x10000 key:0x183a00
00:19:59.443 [2024-11-20 12:34:05.105169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.105193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c739000 len:0x10000 key:0x183a00
00:19:59.443 [2024-11-20 12:34:05.105210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.105233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c718000 len:0x10000 key:0x183a00
00:19:59.443 [2024-11-20 12:34:05.105250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.108192] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:19:59.443 [2024-11-20 12:34:05.108227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100266f400 len:0x10000 key:0x184c00
00:19:59.443 [2024-11-20 12:34:05.108246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.108273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100265f380 len:0x10000 key:0x184c00
00:19:59.443 [2024-11-20 12:34:05.108292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.108316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100264f300 len:0x10000 key:0x184c00
00:19:59.443 [2024-11-20 12:34:05.108333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.108356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100263f280 len:0x10000 key:0x184c00
00:19:59.443 [2024-11-20 12:34:05.108378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.108402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100262f200 len:0x10000 key:0x184c00
00:19:59.443 [2024-11-20 12:34:05.108419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.108442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100261f180 len:0x10000 key:0x184c00
00:19:59.443 [2024-11-20 12:34:05.108459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.108489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100260f100 len:0x10000 key:0x184c00
00:19:59.443 [2024-11-20 12:34:05.108509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.108532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029f0000 len:0x10000 key:0x181a00
00:19:59.443 [2024-11-20 12:34:05.108549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.108572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029dff80 len:0x10000 key:0x181a00
00:19:59.443 [2024-11-20 12:34:05.108589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.108613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029cff00 len:0x10000 key:0x181a00
00:19:59.443 [2024-11-20 12:34:05.108630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.108653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029bfe80 len:0x10000 key:0x181a00
00:19:59.443 [2024-11-20 12:34:05.108670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.108692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029afe00 len:0x10000 key:0x181a00
00:19:59.443 [2024-11-20 12:34:05.108709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.108732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100299fd80 len:0x10000 key:0x181a00
00:19:59.443 [2024-11-20 12:34:05.108749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.443 [2024-11-20 12:34:05.108771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100298fd00 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.108788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.108811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100297fc80 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.108828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.108856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100296fc00 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.108873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.108897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100295fb80 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.108914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.108937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100294fb00 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.108954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.108976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100293fa80 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.108993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100292fa00 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.109033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100291f980 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.109072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100290f900 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.109112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010028ff880 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.109152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010028ef800 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.109192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010028df780 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.109232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010028cf700 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.109271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010028bf680 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.109317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010028af600 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.109358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100289f580 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.109398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100288f500 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.109438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100287f480 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.109487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100286f400 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.109529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100285f380 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.109570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100284f300 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.109610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100283f280 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.109650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100282f200 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.109689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100281f180 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.109729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100280f100 len:0x10000 key:0x181a00
00:19:59.444 [2024-11-20 12:34:05.109774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002bf0000 len:0x10000 key:0x184d00
00:19:59.444 [2024-11-20 12:34:05.109814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002bdff80 len:0x10000 key:0x184d00
00:19:59.444 [2024-11-20 12:34:05.109854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002bcff00 len:0x10000 key:0x184d00
00:19:59.444 [2024-11-20 12:34:05.109894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002bbfe80 len:0x10000 key:0x184d00
00:19:59.444 [2024-11-20 12:34:05.109933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002bafe00 len:0x10000 key:0x184d00
00:19:59.444 [2024-11-20 12:34:05.109973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.109996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b9fd80 len:0x10000 key:0x184d00
00:19:59.444 [2024-11-20 12:34:05.110013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.110037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b8fd00 len:0x10000 key:0x184d00
00:19:59.444 [2024-11-20 12:34:05.110053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.110077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b7fc80 len:0x10000 key:0x184d00
00:19:59.444 [2024-11-20 12:34:05.110094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.110118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b6fc00 len:0x10000 key:0x184d00
00:19:59.444 [2024-11-20 12:34:05.110134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.110157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b5fb80 len:0x10000 key:0x184d00
00:19:59.444 [2024-11-20 12:34:05.110173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.110198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b4fb00 len:0x10000 key:0x184d00
00:19:59.444 [2024-11-20 12:34:05.110218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.444 [2024-11-20 12:34:05.110242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b3fa80 len:0x10000 key:0x184d00
00:19:59.445 [2024-11-20 12:34:05.110259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.445 [2024-11-20 12:34:05.110282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b2fa00 len:0x10000 key:0x184d00
00:19:59.445 [2024-11-20 12:34:05.110299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.445 [2024-11-20 12:34:05.110322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b1f980 len:0x10000 key:0x184d00
00:19:59.445 [2024-11-20 12:34:05.110339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.445 [2024-11-20 12:34:05.110362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b0f900 len:0x10000 key:0x184d00
00:19:59.445 [2024-11-20 12:34:05.110378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.445 [2024-11-20 12:34:05.110401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aff880 len:0x10000 key:0x184d00
00:19:59.445 [2024-11-20 12:34:05.110418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.445 [2024-11-20 12:34:05.110442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100270f900 len:0x10000 key:0x184c00
00:19:59.445 [2024-11-20 12:34:05.110458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.445 [2024-11-20 12:34:05.110504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f877000 len:0x10000 key:0x183a00
00:19:59.445 [2024-11-20 12:34:05.110524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.445 [2024-11-20 12:34:05.110550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a828000 len:0x10000 key:0x183a00
00:19:59.445 [2024-11-20 12:34:05.110567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.445 [2024-11-20 12:34:05.110591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a807000 len:0x10000 key:0x183a00
00:19:59.445 [2024-11-20 12:34:05.110609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.445 [2024-11-20 12:34:05.110632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008830000 len:0x10000 key:0x183a00
00:19:59.445 [2024-11-20 12:34:05.110650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.445 [2024-11-20 12:34:05.110673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008c2f000 len:0x10000 key:0x183a00
00:19:59.445 [2024-11-20 12:34:05.110691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.445 [2024-11-20 12:34:05.110719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f370000 len:0x10000 key:0x183a00
00:19:59.445 [2024-11-20 12:34:05.110738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.445 [2024-11-20 12:34:05.110762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f76f000 len:0x10000 key:0x183a00
00:19:59.445 [2024-11-20 12:34:05.110780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.445 [2024-11-20 12:34:05.110804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a930000 len:0x10000 key:0x183a00
00:19:59.445 [2024-11-20 12:34:05.110822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0
00:19:59.445 [2024-11-20 12:34:05.110846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b635000 len:0x10000 key:0x183a00
00:19:59.445
[2024-11-20 12:34:05.110863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.445 [2024-11-20 12:34:05.113720] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:19:59.445 [2024-11-20 12:34:05.113755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002adf780 len:0x10000 key:0x184d00 00:19:59.445 [2024-11-20 12:34:05.113773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.445 [2024-11-20 12:34:05.113799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002acf700 len:0x10000 key:0x184d00 00:19:59.445 [2024-11-20 12:34:05.113818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.445 [2024-11-20 12:34:05.113841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002abf680 len:0x10000 key:0x184d00 00:19:59.445 [2024-11-20 12:34:05.113859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.445 [2024-11-20 12:34:05.113881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aaf600 len:0x10000 key:0x184d00 00:19:59.445 [2024-11-20 12:34:05.113899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.445 [2024-11-20 12:34:05.113921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a9f580 len:0x10000 key:0x184d00 00:19:59.445 [2024-11-20 12:34:05.113938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.445 [2024-11-20 12:34:05.113961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a8f500 len:0x10000 key:0x184d00 00:19:59.445 [2024-11-20 12:34:05.113977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.445 [2024-11-20 12:34:05.114000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a7f480 len:0x10000 key:0x184d00 00:19:59.445 [2024-11-20 12:34:05.114022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.445 [2024-11-20 12:34:05.114046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a6f400 len:0x10000 key:0x184d00 00:19:59.445 [2024-11-20 12:34:05.114064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.445 [2024-11-20 12:34:05.114087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a5f380 len:0x10000 key:0x184d00 00:19:59.445 [2024-11-20 12:34:05.114104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.445 [2024-11-20 12:34:05.114126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a4f300 len:0x10000 key:0x184d00 00:19:59.445 [2024-11-20 12:34:05.114143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.445 [2024-11-20 12:34:05.114166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a3f280 len:0x10000 key:0x184d00 00:19:59.445 [2024-11-20 12:34:05.114182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.445 [2024-11-20 12:34:05.114205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a2f200 len:0x10000 key:0x184d00 00:19:59.445 [2024-11-20 12:34:05.114222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.445 [2024-11-20 12:34:05.114245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a1f180 len:0x10000 key:0x184d00 00:19:59.445 [2024-11-20 12:34:05.114262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.445 [2024-11-20 12:34:05.114284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a0f100 len:0x10000 key:0x184d00 00:19:59.445 [2024-11-20 12:34:05.114302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.445 [2024-11-20 12:34:05.114324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002df0000 len:0x10000 key:0x183c00 00:19:59.445 [2024-11-20 12:34:05.114341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.445 [2024-11-20 12:34:05.114364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ddff80 len:0x10000 key:0x183c00 00:19:59.445 [2024-11-20 12:34:05.114381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.445 [2024-11-20 12:34:05.114404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dcff00 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.114421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.114444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dbfe80 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.114460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.114497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dafe00 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.114517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.114542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d9fd80 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.114559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.114583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d8fd00 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.114599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.114622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d7fc80 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.114639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.114662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d6fc00 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.114678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.114701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d5fb80 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.114718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.114741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d4fb00 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.114757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.114780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d3fa80 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.114797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.114820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d2fa00 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.114836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.114859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d1f980 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.114876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.114900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d0f900 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.114917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.114944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cff880 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.114962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.114985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cef800 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.115002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cdf780 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.115042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ccf700 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.115082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cbf680 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.115122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002caf600 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.115162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x201002c9f580 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.115203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c8f500 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.115242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c7f480 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.115282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c6f400 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.115323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c5f380 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.115362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c4f300 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.115410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c3f280 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.115451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c2f200 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.115501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c1f180 len:0x10000 key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.115542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c0f100 len:0x10000 
key:0x183c00 00:19:59.446 [2024-11-20 12:34:05.115583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ff0000 len:0x10000 key:0x184200 00:19:59.446 [2024-11-20 12:34:05.115622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fdff80 len:0x10000 key:0x184200 00:19:59.446 [2024-11-20 12:34:05.115662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fcff00 len:0x10000 key:0x184200 00:19:59.446 [2024-11-20 12:34:05.115702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fbfe80 len:0x10000 key:0x184200 00:19:59.446 [2024-11-20 12:34:05.115742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fafe00 len:0x10000 key:0x184200 00:19:59.446 [2024-11-20 12:34:05.115781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f9fd80 len:0x10000 key:0x184200 00:19:59.446 [2024-11-20 12:34:05.115821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.446 [2024-11-20 12:34:05.115844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f8fd00 len:0x10000 key:0x184200 00:19:59.446 [2024-11-20 12:34:05.115865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.115889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f7fc80 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.115906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.115929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f6fc00 len:0x10000 key:0x184200 00:19:59.447 
[2024-11-20 12:34:05.115946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.115969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f5fb80 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.115986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.116009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f4fb00 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.116026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.116048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f3fa80 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.116065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.116088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f2fa00 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.116105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.116128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f1f980 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.116144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.116167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f0f900 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.116184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.116206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eff880 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.116223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.116246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eef800 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.116263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.116286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002edf780 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.116307] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.116331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aef800 len:0x10000 key:0x184d00 00:19:59.447 [2024-11-20 12:34:05.116348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.119214] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:19:59.447 [2024-11-20 12:34:05.119249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ebf680 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.119267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.119303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eaf600 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.119324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.119348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e9f580 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.119365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.119388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e8f500 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.119405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.119428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e7f480 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.119445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.119467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e6f400 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.119492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.119517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e5f380 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.119535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.119558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e4f300 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.119575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.119597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e3f280 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.119614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.119637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e2f200 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.119659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.119682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e1f180 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.119698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.119721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e0f100 len:0x10000 key:0x184200 00:19:59.447 [2024-11-20 12:34:05.119738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.119761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031f0000 len:0x10000 key:0x184b00 00:19:59.447 [2024-11-20 12:34:05.119777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.119799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031dff80 len:0x10000 key:0x184b00 00:19:59.447 [2024-11-20 12:34:05.119816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.119840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031cff00 len:0x10000 key:0x184b00 00:19:59.447 [2024-11-20 12:34:05.119856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.119878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031bfe80 len:0x10000 key:0x184b00 00:19:59.447 [2024-11-20 12:34:05.119895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.119919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x2010031afe00 len:0x10000 key:0x184b00 00:19:59.447 [2024-11-20 12:34:05.119935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.119958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100319fd80 len:0x10000 key:0x184b00 00:19:59.447 [2024-11-20 12:34:05.119975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.119997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100318fd00 len:0x10000 key:0x184b00 00:19:59.447 [2024-11-20 12:34:05.120014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.120036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100317fc80 len:0x10000 key:0x184b00 00:19:59.447 [2024-11-20 12:34:05.120053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.120076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100316fc00 len:0x10000 key:0x184b00 00:19:59.447 [2024-11-20 12:34:05.120096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.120120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100315fb80 len:0x10000 key:0x184b00 00:19:59.447 [2024-11-20 12:34:05.120137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.120160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100314fb00 len:0x10000 key:0x184b00 00:19:59.447 [2024-11-20 12:34:05.120177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.447 [2024-11-20 12:34:05.120199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100313fa80 len:0x10000 key:0x184b00 00:19:59.447 [2024-11-20 12:34:05.120216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.120239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100312fa00 len:0x10000 key:0x184b00 00:19:59.448 [2024-11-20 12:34:05.120256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.120278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20100311f980 len:0x10000 key:0x184b00 00:19:59.448 [2024-11-20 12:34:05.120295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.120317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100310f900 len:0x10000 key:0x184b00 00:19:59.448 [2024-11-20 12:34:05.120334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.120356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ff880 len:0x10000 key:0x184b00 00:19:59.448 [2024-11-20 12:34:05.120373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.120396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ef800 len:0x10000 key:0x184b00 00:19:59.448 [2024-11-20 12:34:05.120412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.120435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030df780 len:0x10000 key:0x184b00 00:19:59.448 [2024-11-20 12:34:05.120452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.120475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030cf700 len:0x10000 key:0x184b00 00:19:59.448 [2024-11-20 12:34:05.120501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.120525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030bf680 len:0x10000 key:0x184b00 00:19:59.448 [2024-11-20 12:34:05.120542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.120571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030af600 len:0x10000 key:0x184b00 00:19:59.448 [2024-11-20 12:34:05.120588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.120611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100309f580 len:0x10000 key:0x184b00 00:19:59.448 [2024-11-20 12:34:05.120629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.120651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100308f500 len:0x10000 
key:0x184b00 00:19:59.448 [2024-11-20 12:34:05.120668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.120691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100307f480 len:0x10000 key:0x184b00 00:19:59.448 [2024-11-20 12:34:05.120708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.120730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100306f400 len:0x10000 key:0x184b00 00:19:59.448 [2024-11-20 12:34:05.120747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.120769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100305f380 len:0x10000 key:0x184b00 00:19:59.448 [2024-11-20 12:34:05.120786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.120809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100304f300 len:0x10000 key:0x184b00 00:19:59.448 [2024-11-20 12:34:05.120826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.120848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100303f280 len:0x10000 key:0x184b00 00:19:59.448 [2024-11-20 12:34:05.120865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.120888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100302f200 len:0x10000 key:0x184b00 00:19:59.448 [2024-11-20 12:34:05.120904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.120926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100301f180 len:0x10000 key:0x184b00 00:19:59.448 [2024-11-20 12:34:05.120943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.120965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100300f100 len:0x10000 key:0x184b00 00:19:59.448 [2024-11-20 12:34:05.120982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.121008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033f0000 len:0x10000 key:0x184500 00:19:59.448 
[2024-11-20 12:34:05.121026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.121049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033dff80 len:0x10000 key:0x184500 00:19:59.448 [2024-11-20 12:34:05.121066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.121088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033cff00 len:0x10000 key:0x184500 00:19:59.448 [2024-11-20 12:34:05.121106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.121129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033bfe80 len:0x10000 key:0x184500 00:19:59.448 [2024-11-20 12:34:05.121145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.121168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033afe00 len:0x10000 key:0x184500 00:19:59.448 [2024-11-20 12:34:05.121185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.121207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100339fd80 len:0x10000 key:0x184500 00:19:59.448 [2024-11-20 12:34:05.121224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.121247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100338fd00 len:0x10000 key:0x184500 00:19:59.448 [2024-11-20 12:34:05.121264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.121287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100337fc80 len:0x10000 key:0x184500 00:19:59.448 [2024-11-20 12:34:05.121303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.121326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100336fc00 len:0x10000 key:0x184500 00:19:59.448 [2024-11-20 12:34:05.121344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.121366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100335fb80 len:0x10000 key:0x184500 00:19:59.448 [2024-11-20 12:34:05.121383] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.121406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100334fb00 len:0x10000 key:0x184500 00:19:59.448 [2024-11-20 12:34:05.121423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.121446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100333fa80 len:0x10000 key:0x184500 00:19:59.448 [2024-11-20 12:34:05.121466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.121510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100332fa00 len:0x10000 key:0x184500 00:19:59.448 [2024-11-20 12:34:05.121530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.121553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100331f980 len:0x10000 key:0x184500 00:19:59.448 [2024-11-20 12:34:05.121570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.121593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100330f900 len:0x10000 key:0x184500 00:19:59.448 [2024-11-20 12:34:05.121610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.121633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ff880 len:0x10000 key:0x184500 00:19:59.448 [2024-11-20 12:34:05.121649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.448 [2024-11-20 12:34:05.121672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ef800 len:0x10000 key:0x184500 00:19:59.448 [2024-11-20 12:34:05.121689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.449 [2024-11-20 12:34:05.121712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032df780 len:0x10000 key:0x184500 00:19:59.449 [2024-11-20 12:34:05.121729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.449 [2024-11-20 12:34:05.121752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032cf700 len:0x10000 key:0x184500 00:19:59.449 [2024-11-20 12:34:05.121769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.449 [2024-11-20 12:34:05.121792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032bf680 len:0x10000 key:0x184500 00:19:59.449 [2024-11-20 12:34:05.121809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.449 [2024-11-20 12:34:05.121832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ecf700 len:0x10000 key:0x184200 00:19:59.449 [2024-11-20 12:34:05.121849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f5ca8000 sqhd:8250 p:0 m:0 dnr:0 00:19:59.449 [2024-11-20 12:34:05.149215] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:19:59.449 [2024-11-20 12:34:05.149321] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:19:59.449 [2024-11-20 12:34:05.149351] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:19:59.449 [2024-11-20 12:34:05.149380] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:19:59.449 [2024-11-20 12:34:05.149401] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:19:59.449 [2024-11-20 12:34:05.149423] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:19:59.449 [2024-11-20 12:34:05.149445] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:19:59.449 [2024-11-20 12:34:05.149467] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:19:59.449 [2024-11-20 12:34:05.149497] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:19:59.449 [2024-11-20 12:34:05.149520] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:19:59.449 [2024-11-20 12:34:05.149541] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
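The storm above is the expected shape of a forced multipath failover: one nvme_io_qpair_print_command / spdk_nvme_print_completion pair per in-flight IO as the submission queue is deleted, then a burst of "Unable to perform failover, already in progress" notices per subsystem. When triaging a saved console log offline, a throwaway filter along the lines below condenses the storm into counts (a Python sketch; the regexes match only the NOTICE wording visible above, and console.log is a hypothetical path to the saved job output, not part of the test suite):

import re
from collections import Counter

ABORT_RE = re.compile(r"ABORTED - SQ DELETION \(00/08\)")
FAILOVER_RE = re.compile(r"\[(nqn\.[^,\]]+), \d+\] Unable to perform failover")

def summarize(path):
    # Count aborted completions and failover-in-progress notices per NQN.
    aborts = 0
    failovers = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if ABORT_RE.search(line):
                aborts += 1
            m = FAILOVER_RE.search(line)
            if m:
                failovers[m.group(1)] += 1
    print(f"completions aborted by SQ deletion: {aborts}")
    for nqn, n in failovers.most_common():
        print(f"  {nqn}: {n} failover-in-progress notices")

summarize("console.log")  # hypothetical path to the saved console output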
00:19:59.449 [2024-11-20 12:34:05.156188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:19:59.449 [2024-11-20 12:34:05.156226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:19:59.449 [2024-11-20 12:34:05.156247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:19:59.449 [2024-11-20 12:34:05.156266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:19:59.449 task offset: 44032 on job bdev=Nvme1n1 fails
00:19:59.449
00:19:59.449 Latency(us)
[2024-11-20T11:34:05.215Z] Device Information : runtime(s)  IOPS    MiB/s  Fail/s  TO/s  Average    min       max
[2024-11-20T11:34:05.215Z] (all ten jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; each job ended in error after about the runtime shown)
[2024-11-20T11:34:05.215Z] Nvme1n1  :  2.54   125.98   7.87  25.20  0.00  419468.33  41748.86  1068770.80
[2024-11-20T11:34:05.215Z] Nvme2n1  :  2.54   125.92   7.87  25.18  0.00  415740.21  46409.20  1068770.80
[2024-11-20T11:34:05.215Z] Nvme3n1  :  2.54   135.30   8.46  25.17  0.00  387845.60   5364.24  1062557.01
[2024-11-20T11:34:05.215Z] Nvme4n1  :  2.54   134.85   8.43  25.16  0.00  385414.06  13592.65  1062557.01
[2024-11-20T11:34:05.215Z] Nvme5n1  :  2.48   128.81   8.05  25.76  0.00  396278.64  19903.53  1168191.34
[2024-11-20T11:34:05.215Z] Nvme6n1  :  2.49   128.51   8.03  25.70  0.00  393470.99  24563.86  1155763.77
[2024-11-20T11:34:05.215Z] Nvme7n1  :  2.50   128.22   8.01  25.64  0.00  390655.75  27379.48  1137122.42
[2024-11-20T11:34:05.215Z] Nvme8n1  :  2.50   127.93   8.00  25.59  0.00  387749.36  30486.38  1124694.85
[2024-11-20T11:34:05.215Z] Nvme9n1  :  2.51   127.65   7.98  25.53  0.00  384824.76  66798.17  1112267.28
[2024-11-20T11:34:05.215Z] Nvme10n1 :  2.51   101.90   6.37  25.48  0.00  458088.75  67574.90  1093625.93
[2024-11-20T11:34:05.215Z] ===================================================================================================================
[2024-11-20T11:34:05.215Z] Total    :  2.54  1265.08  79.07 254.41  0.00  400825.83   5364.24  1168191.34
00:19:59.449 [2024-11-20 12:34:05.185643] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:59.449 [2024-11-20 12:34:05.187171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:19:59.449 [2024-11-20 12:34:05.187209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:19:59.449 [2024-11-20 12:34:05.187231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:19:59.449 [2024-11-20 12:34:05.187250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:19:59.449 [2024-11-20 12:34:05.187269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:19:59.449 [2024-11-20 12:34:05.187288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:19:59.711 [2024-11-20 12:34:05.206846] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:59.711 [2024-11-20 12:34:05.206879] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:59.711 [2024-11-20 12:34:05.206895] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed040
00:19:59.711 [2024-11-20 12:34:05.207071] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:59.711 [2024-11-20 12:34:05.207131] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:59.711 [2024-11-20 12:34:05.207157] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168e4280
00:19:59.711 [2024-11-20 12:34:05.207340] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:59.711 [2024-11-20 12:34:05.207376] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:59.711 [2024-11-20 12:34:05.207389] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168d7000
00:19:59.711 [2024-11-20 12:34:05.207525] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:59.711 [2024-11-20 12:34:05.207583] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:59.711 [2024-11-20 12:34:05.207598] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ad8c0
00:19:59.711 [2024-11-20 12:34:05.207799] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:59.711 [2024-11-20 12:34:05.207822] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:59.711 [2024-11-20 12:34:05.207844] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016881100
00:19:59.711 [2024-11-20 12:34:05.207976] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:59.711 [2024-11-20 12:34:05.208011] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:59.711 [2024-11-20 12:34:05.208023] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001688e380
00:19:59.711 [2024-11-20 12:34:05.208148] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:59.711 [2024-11-20 12:34:05.208170] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:59.711 [2024-11-20 12:34:05.208182] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001689b2c0
00:19:59.711 [2024-11-20 12:34:05.208311] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:59.711 [2024-11-20 12:34:05.208334] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:59.711 [2024-11-20 12:34:05.208347] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168b41c0
00:19:59.711 [2024-11-20 12:34:05.208443] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:59.711 [2024-11-20 12:34:05.208465] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:59.711 [2024-11-20 12:34:05.208486] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168b0800
00:19:59.711 [2024-11-20 12:34:05.208600] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:59.711 [2024-11-20 12:34:05.208634] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:59.711 [2024-11-20 12:34:05.208646] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168cf040
12:34:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2797575
12:34:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
12:34:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2797575
12:34:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
12:34:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
12:34:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
12:34:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:59.711 12:34:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2797575
00:20:00.654 [2024-11-20 12:34:06.211121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:00.654 [2024-11-20 12:34:06.211158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:00.654 [2024-11-20 12:34:06.213184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:00.654 [2024-11-20 12:34:06.213220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:20:00.654 [2024-11-20 12:34:06.214719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:00.654 [2024-11-20 12:34:06.214751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:20:00.654 [2024-11-20 12:34:06.216270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:00.654 [2024-11-20 12:34:06.216304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:20:00.654 [2024-11-20 12:34:06.218022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:00.654 [2024-11-20 12:34:06.218056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:20:00.654 [2024-11-20 12:34:06.219574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:00.654 [2024-11-20 12:34:06.219607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:20:00.654 [2024-11-20 12:34:06.221219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:00.654 [2024-11-20 12:34:06.221243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:20:00.654 [2024-11-20 12:34:06.222747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:00.654 [2024-11-20 12:34:06.222772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:20:00.654 [2024-11-20 12:34:06.224896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:00.654 [2024-11-20 12:34:06.224923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:20:00.655 [2024-11-20 12:34:06.227577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:00.655 [2024-11-20 12:34:06.227606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:20:00.655 [2024-11-20 12:34:06.227623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:20:00.655 [2024-11-20 12:34:06.227637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:20:00.655 [2024-11-20 12:34:06.227654] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:20:00.655 [2024-11-20 12:34:06.227683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:20:00.655 [2024-11-20 12:34:06.227709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:20:00.655 [2024-11-20 12:34:06.227723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:20:00.655 [2024-11-20 12:34:06.227737] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state
00:20:00.655 [2024-11-20 12:34:06.227752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:20:00.655 [2024-11-20 12:34:06.227771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:20:00.655 [2024-11-20 12:34:06.227785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:20:00.655 [2024-11-20 12:34:06.227799] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state
00:20:00.655 [2024-11-20 12:34:06.227814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:20:00.655 [2024-11-20 12:34:06.227832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:20:00.655 [2024-11-20 12:34:06.227852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:20:00.655 [2024-11-20 12:34:06.227866] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state
00:20:00.655 [2024-11-20 12:34:06.227881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:20:00.655 [2024-11-20 12:34:06.228027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:20:00.655 [2024-11-20 12:34:06.228049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:20:00.655 [2024-11-20 12:34:06.228063] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state
00:20:00.655 [2024-11-20 12:34:06.228079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:20:00.655 [2024-11-20 12:34:06.228097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:20:00.655 [2024-11-20 12:34:06.228112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:20:00.655 [2024-11-20 12:34:06.228126] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state
00:20:00.655 [2024-11-20 12:34:06.228140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:20:00.655 [2024-11-20 12:34:06.228158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:20:00.655 [2024-11-20 12:34:06.228172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:20:00.655 [2024-11-20 12:34:06.228186] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state
00:20:00.655 [2024-11-20 12:34:06.228200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:20:00.655 [2024-11-20 12:34:06.228219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:20:00.655 [2024-11-20 12:34:06.228233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:20:00.655 [2024-11-20 12:34:06.228247] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state
00:20:00.655 [2024-11-20 12:34:06.228261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:20:00.655 [2024-11-20 12:34:06.228280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:20:00.655 [2024-11-20 12:34:06.228295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:20:00.655 [2024-11-20 12:34:06.228308] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state
00:20:00.655 [2024-11-20 12:34:06.228322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:20:00.655 [2024-11-20 12:34:06.228340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:20:00.655 [2024-11-20 12:34:06.228354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:20:00.655 [2024-11-20 12:34:06.228368] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state
00:20:00.655 [2024-11-20 12:34:06.228382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
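Everything from the RDMA_CM_EVENT_REJECTED messages down to this point is the host noticing that the target has gone away: reconnect attempts are rejected at the CM level (connect error -74), polling the completion queue then returns -6 (ENXIO), and each per-cnode reset therefore ends in "Resetting controller failed". For tc3 that is the intended outcome, not a fabric fault. If the same pattern appeared outside a shutdown test, a few stock commands help separate a dead target from a dead link; this sketch assumes the iproute2 rdma tool and the libibverbs utilities are installed on the host:

  # An intentional target shutdown leaves the RDMA port ACTIVE; a cable or
  # switch-port problem usually does not.
  rdma link show

  # List the verbs devices the host can still open.
  ibv_devinfo -l

  # Check plain IP reachability of the target address used by this run.
  ping -c 3 192.168.100.8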
00:20:00.655 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:20:00.655 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:00.655 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:20:00.655 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:20:00.655 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:20:00.655 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:00.655 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:20:00.655 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:00.655 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:00.916 rmmod nvme_rdma 00:20:00.916 rmmod nvme_fabrics 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2797425 ']' 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2797425 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2797425 ']' 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2797425 00:20:00.916 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2797425) - No such process 00:20:00.916 
12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2797425 is not found' 00:20:00.916 Process with pid 2797425 is not found 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:00.916 00:20:00.916 real 0m6.261s 00:20:00.916 user 0m19.527s 00:20:00.916 sys 0m1.190s 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:00.916 ************************************ 00:20:00.916 END TEST nvmf_shutdown_tc3 00:20:00.916 ************************************ 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]] 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:00.916 ************************************ 00:20:00.916 START TEST nvmf_shutdown_tc4 00:20:00.916 ************************************ 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # 
xtrace_disable 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
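The long nvmf/common.sh xtrace run above is gather_supported_nvmf_pci_devs building allow-lists of NIC PCI IDs: e810/x722 entries under the Intel vendor ID 0x8086, and the mlx list under Mellanox's 0x15b3, which this rig matches a few lines below as 0000:83:00.0/1 (0x15b3 - 0x1015, a ConnectX-4 Lx part). The same lookup can be reproduced directly with lspci; a sketch, with the ID list trimmed to a few of the entries seen above:

  # Any ConnectX-4 Lx functions present? (vendor 0x15b3, device 0x1015)
  lspci -nn -d 15b3:1015

  # Generic form: probe several of the known NVMe-oF-capable IDs and tag the output.
  for id in 15b3:1015 15b3:1017 8086:1592 8086:37d2; do
      lspci -n -d "$id" | sed "s|^|[$id] |"
  done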
00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:20:00.916 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:00.916 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:20:00.917 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:00.917 
12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:20:00.917 Found net devices under 0000:83:00.0: mlx_0_0 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:20:00.917 Found net devices under 0000:83:00.1: mlx_0_1 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # rdma_device_init 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:20:00.917 12:34:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:00.917 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:00.917 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:20:00.917 altname enp131s0f0np0 00:20:00.917 inet 192.168.100.8/24 scope global mlx_0_0 00:20:00.917 valid_lft forever preferred_lft forever 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:00.917 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:00.917 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:20:00.917 altname enp131s0f1np1 00:20:00.917 inet 192.168.100.9/24 scope global mlx_0_1 00:20:00.917 valid_lft forever preferred_lft forever 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:00.917 
12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.917 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:00.918 12:34:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:00.918 192.168.100.9' 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:00.918 192.168.100.9' 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # head -n 1 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:00.918 192.168.100.9' 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # tail -n +2 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # head -n 1 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:00.918 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:01.177 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:01.177 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:01.177 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:01.177 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:01.177 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2798165 00:20:01.177 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:01.177 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2798165 00:20:01.177 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2798165 ']' 00:20:01.177 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:20:01.177 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.177 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.177 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.177 12:34:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:01.177 [2024-11-20 12:34:06.758405] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:20:01.177 [2024-11-20 12:34:06.758504] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.177 [2024-11-20 12:34:06.877914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:01.436 [2024-11-20 12:34:06.988644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.436 [2024-11-20 12:34:06.988746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.436 [2024-11-20 12:34:06.988780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.436 [2024-11-20 12:34:06.988810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.436 [2024-11-20 12:34:06.988835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
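nvmfappstart backgrounds a fresh nvmf_tgt (pid 2798165 here) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock, which is why the "Waiting for process to start up and listen..." line precedes the SPDK/DPDK banner. A minimal standalone equivalent of that launch-and-poll pattern, assuming an SPDK build in ./build and scripts/rpc.py on hand (paths are illustrative, not taken from this workspace):

  # Start the target: shm id 0 (-i), all tracepoint groups (-e 0xFFFF),
  # cores 1-4 (-m 0x1E).
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!

  # Poll the UNIX-domain RPC socket until the app is ready; this is what
  # waitforlisten does, minus its retry cap.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done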
00:20:01.436 [2024-11-20 12:34:06.991228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.436 [2024-11-20 12:34:06.991283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:01.436 [2024-11-20 12:34:06.991361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:01.436 [2024-11-20 12:34:06.991369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.436 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.436 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:20:01.436 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:01.436 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:01.436 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:01.436 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.436 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:01.436 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.436 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:01.436 [2024-11-20 12:34:07.152351] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23030d0/0x23075c0) succeed. 00:20:01.436 [2024-11-20 12:34:07.167374] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2304760/0x2348c60) succeed. 
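The core mask explains the reactor lines: 0x1E is binary 11110, so reactors come up on cores 1 through 4. The test's first RPC then creates the RDMA transport, and the two "Create IB device mlx5_X ... succeed" notices confirm both ConnectX ports were claimed. rpc_cmd is a thin wrapper; the direct call is roughly:

  # Same options as recorded above: a 1024-entry shared receive buffer pool
  # (--num-shared-buffers) and an 8192-byte IO unit size (-u).
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192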
00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:01.696 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:20:01.696 Malloc1
00:20:01.696 [2024-11-20 12:34:07.425282] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:20:01.696 Malloc2
00:20:01.956 Malloc3
00:20:01.956 Malloc4
00:20:01.956 Malloc5
00:20:01.956 Malloc6
00:20:01.956 Malloc7
00:20:02.217 Malloc8
00:20:02.217 Malloc9
00:20:02.217 Malloc10
00:20:02.217 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.217 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:20:02.217 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:02.217 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:20:02.217 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2798309
00:20:02.217 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4
00:20:02.217 12:34:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:20:02.477 [2024-11-20 12:34:07.981490] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
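With the ten subsystems up, the test launches spdk_nvme_perf against the target in the background (perfpid=2798309) and sleeps five seconds before pulling the target out from under it. As I read the perf flags (worth confirming against spdk_nvme_perf --help on the build in question): -q 128 is the queue depth, -o 45056 the I/O size in bytes (44 KiB), -O 4096 the I/O unit size, -w randwrite the access pattern, -t 20 the run time in seconds, -r the target's transport ID string, and -P 4 the number of queue pairs. A standalone re-run of the same workload would look like:

    # Re-issue the same workload by hand (sketch; path is relative to an SPDK
    # build tree, flag readings as noted above).
    ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4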
00:20:07.745 12:34:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:20:07.745 12:34:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2798165
00:20:07.746 12:34:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2798165 ']'
00:20:07.746 12:34:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2798165
00:20:07.746 12:34:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:20:07.746 12:34:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:07.746 12:34:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2798165
00:20:07.746 12:34:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:07.746 12:34:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:07.746 12:34:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2798165'
00:20:07.746 killing process with pid 2798165
00:20:07.746 12:34:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2798165
00:20:07.746 12:34:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2798165
00:20:07.746 NVMe io qpair process completion error
00:20:07.746 [the qpair completion-error notice above repeats for the remaining controllers, interleaved with "starting I/O failed: -6" entries; duplicates elided]
00:20:08.004 12:34:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:20:08.574 Write completed with error (sct=0, sc=8)
00:20:08.574 starting I/O failed: -6
00:20:08.574 [the write-error/submit-failure pair above repeats for every I/O still queued; duplicates elided through the end of this burst]
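The killprocess trace above is autotest_common.sh walking its guard rails before killing the target: bail if no PID was passed, confirm the process is still alive with kill -0, look up its command name, refuse to kill a bare sudo wrapper, then kill and wait. A condensed sketch of that pattern (simplified relative to the real helper, which has more branches):

    # Condensed killprocess pattern (sketch of what the trace walks through).
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                    # no PID given
        kill -0 "$pid" 2>/dev/null || return 0       # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_1
        [ "$name" = sudo ] && return 1               # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true              # reap if it is our child
    }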
(sct=0, sc=8) 00:20:08.574 starting I/O failed: -6 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 starting I/O failed: -6 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 starting I/O failed: -6 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 starting I/O failed: -6 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 starting I/O failed: -6 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 starting I/O failed: -6 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 starting I/O failed: -6 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error 
(sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.574 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 [2024-11-20 12:34:14.067097] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 
00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 
00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error 
(sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 [2024-11-20 12:34:14.079034] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.575 starting I/O failed: -6 00:20:08.575 Write completed with error (sct=0, sc=8) 00:20:08.576 starting I/O failed: -6 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 starting I/O failed: -6 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 starting I/O failed: -6 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 starting I/O failed: -6 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 starting I/O failed: -6 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 starting I/O failed: -6 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 
Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 
00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 [2024-11-20 12:34:14.094468] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 
00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.576 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error 
(sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 [2024-11-20 12:34:14.105490] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Submitting Keep Alive failed 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 
starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 starting I/O failed: -6 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 
Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.577 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 [2024-11-20 12:34:14.116909] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 
starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 
00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 [2024-11-20 12:34:14.129449] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Submitting Keep Alive failed 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with 
error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.578 starting I/O failed: -6 00:20:08.578 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error 
(sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 starting I/O failed: -6 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 Write completed with error (sct=0, sc=8) 00:20:08.579 Write completed with error (sct=0, sc=8) 
00:20:08.579 Write completed with error (sct=0, sc=8)
00:20:08.579 [the line above repeats roughly 40 more times]
00:20:08.579 starting I/O failed: -6
00:20:08.579 Write completed with error (sct=0, sc=8)
00:20:08.579 [2024-11-20 12:34:14.142269] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Submitting Keep Alive failed
00:20:08.579 [the pair "starting I/O failed: -6" / "Write completed with error (sct=0, sc=8)" repeats roughly 27 more times through 00:20:08.580]
00:20:08.580 Write completed with error (sct=0, sc=8)
00:20:08.580 [the line above repeats roughly 115 more times]
00:20:08.580 [2024-11-20 12:34:14.154223] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Submitting Keep Alive failed
00:20:08.580 NVMe io qpair process completion error
00:20:08.580 NVMe io qpair process completion error
00:20:08.840 12:34:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2798309
00:20:08.840 12:34:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:20:08.840 12:34:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2798309
00:20:08.840 12:34:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:20:08.840 12:34:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:08.840 12:34:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:20:08.840 12:34:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:08.840 12:34:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2798309
00:20:09.413 [2024-11-20 12:34:15.156960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:09.413 [2024-11-20 12:34:15.156998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:20:09.413 Write completed with error (sct=0, sc=8)
00:20:09.413 [the line above repeats roughly 42 more times]
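[Annotator's note. The xtrace above is autotest_common.sh's NOT wrapper asserting that `wait 2798309` fails: the spdk_nvme_perf workers are expected to die once the subsystems are shut down, so a nonzero exit is the passing case here (hence es=1 later in the log). A minimal sketch of the same inversion pattern, assuming bash; this stands in for the real helper in test/common/autotest_common.sh, not a copy of it:]

    # Sketch of an exit-status-inverting assertion helper.
    NOT() {
        if "$@"; then
            return 1   # the command unexpectedly succeeded
        fi
        return 0       # the command failed, which is what the test wants
    }
    NOT wait 2798309   # passes exactly when wait reports a nonzero exit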
00:20:09.413 Write completed with error (sct=0, sc=8)
00:20:09.413 [the line above repeats roughly 90 more times]
00:20:09.413 [2024-11-20 12:34:15.159571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:09.414 [2024-11-20 12:34:15.159646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:20:09.414 Write completed with error (sct=0, sc=8)
00:20:09.414 [the line above repeats roughly 13 more times]
00:20:09.414 [2024-11-20 12:34:15.162497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:09.414 [2024-11-20 12:34:15.162575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:09.414 Write completed with error (sct=0, sc=8)
00:20:09.414 [the line above repeats roughly 12 more times]
00:20:09.414 [2024-11-20 12:34:15.165556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:09.414 Write completed with error (sct=0, sc=8)
00:20:09.414 [2024-11-20 12:34:15.165590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:20:09.414 Write completed with error (sct=0, sc=8)
00:20:09.414 [the line above repeats roughly 11 more times]
00:20:09.414 [2024-11-20 12:34:15.168458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:09.414 [2024-11-20 12:34:15.168543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:20:09.414 Write completed with error (sct=0, sc=8)
00:20:09.414 [the line above repeats roughly 58 more times]
00:20:09.414 [2024-11-20 12:34:15.170784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:09.414 [2024-11-20 12:34:15.170843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:20:09.414 Write completed with error (sct=0, sc=8)
00:20:09.414 [the line above repeats roughly 22 more times]
00:20:09.414 [2024-11-20 12:34:15.173164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:09.414 [2024-11-20 12:34:15.173221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:20:09.414 Write completed with error (sct=0, sc=8)
00:20:09.414 [the line above repeats roughly 57 more times through 00:20:09.675]
00:20:09.675 [2024-11-20 12:34:15.182557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:09.675 [2024-11-20 12:34:15.182622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:20:09.675 [2024-11-20 12:34:15.184690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:09.675 [2024-11-20 12:34:15.184745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:20:09.675 [2024-11-20 12:34:15.237434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:09.675 [2024-11-20 12:34:15.237581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:20:09.675 Initializing NVMe Controllers
00:20:09.675 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2
00:20:09.675 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3
00:20:09.675 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:20:09.675 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7
00:20:09.675 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10
00:20:09.675 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6
00:20:09.675 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:20:09.676 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9
00:20:09.676 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4
00:20:09.676 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5
00:20:09.676 [each of the ten controllers above also reported: "Controller IO queue size 128, less than required." / "Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver."]
00:20:09.676 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:20:09.676 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:20:09.676 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:09.676 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:20:09.676 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:20:09.676 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:20:09.676 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:20:09.676 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:20:09.676 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:20:09.676 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:20:09.676 Initialization complete. Launching workers.
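[Annotator's note. "Controller IO queue size 128, less than required" means the target granted smaller IO queues than the benchmark's queue depth, so excess requests sit queued in the initiator's driver; the log's own advice is to lower the depth or the IO size. A hedged re-run sketch using spdk_nvme_perf's documented -q/-o/-w/-t/-r flags, with the transport ID taken from the attach lines above; the depth, size, and duration values here are illustrative, not what the harness used:]

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode2'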
00:20:09.676 ========================================================
00:20:09.676                                                                                  Latency(us)
00:20:09.676 Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:20:09.676 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0 :    1265.59      54.38   99496.42   49395.29 1244846.37
00:20:09.676 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0 :    1268.60      54.51   99356.80   47338.04 1246734.37
00:20:09.676 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0 :    1260.74      54.17  100118.02   45674.23 1285830.55
00:20:09.676 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0 :    1303.51      56.01   96583.18     187.12 1255752.30
00:20:09.676 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:    1260.24      54.15  100417.93   37326.82 1313850.51
00:20:09.676 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0 :    1282.46      55.11   98780.38     206.87 1294282.14
00:20:09.676 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0 :    1300.67      55.89  114366.98     172.19 2245322.43
00:20:09.676 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0 :    1301.51      55.92  114412.00     192.23 2154716.95
00:20:09.676 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0 :    1258.74      54.09  100611.64   20756.24 1306820.12
00:20:09.676 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0 :    1264.92      54.35  100240.75    3436.34 1323871.98
00:20:09.676 ========================================================
00:20:09.676 Total                                                                          :   12766.98     548.58  102484.74     172.19 2245322.43
00:20:09.676
00:20:09.676 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:20:09.676 rmmod nvme_rdma
00:20:09.676 rmmod nvme_fabrics
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2798165 ']'
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2798165
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2798165 ']'
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2798165
00:20:09.676 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2798165) - No such process
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2798165 is not found'
00:20:09.676 Process with pid 2798165 is not found
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:09.676 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:20:09.676
00:20:09.676 real 0m8.808s
00:20:09.676 user 0m32.335s
00:20:09.677 sys 0m1.195s
00:20:09.677 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:09.677 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:20:09.677 ************************************
00:20:09.677 END TEST nvmf_shutdown_tc4
00:20:09.677 ************************************
00:20:09.677 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:20:09.677
00:20:09.677 real 0m30.188s
00:20:09.677 user 1m44.443s
00:20:09.677 sys 0m6.477s
00:20:09.677 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:09.677 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:20:09.677 ************************************
00:20:09.677 END TEST nvmf_shutdown
00:20:09.677 ************************************
00:20:09.677 12:34:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma
00:20:09.677 12:34:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:20:09.677 12:34:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:09.677 12:34:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
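[Annotator's note. nvmftestfini's teardown unloads the initiator-side kernel modules inside a set +e / set -e window with up to 20 attempts, since module references can linger briefly after the RDMA disconnects. A minimal sketch of that retry pattern, assuming bash and root; the sleep between attempts is an assumption, not something the trace shows:]

    set +e
    for i in {1..20}; do
        # -v echoes the underlying rmmod calls (the "rmmod nvme_rdma" lines above),
        # -r removes the module together with its now-unused dependencies
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1   # assumed backoff between attempts
    done
    set -e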
************************************ 00:20:09.677 START TEST nvmf_nsid 00:20:09.677 ************************************ 00:20:09.677 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:20:09.677 * Looking for test storage... 00:20:09.677 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:09.677 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:09.677 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:09.677 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:09.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.937 --rc genhtml_branch_coverage=1 00:20:09.937 --rc genhtml_function_coverage=1 00:20:09.937 --rc genhtml_legend=1 00:20:09.937 --rc geninfo_all_blocks=1 00:20:09.937 --rc geninfo_unexecuted_blocks=1 00:20:09.937 00:20:09.937 ' 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:09.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.937 --rc genhtml_branch_coverage=1 00:20:09.937 --rc genhtml_function_coverage=1 00:20:09.937 --rc genhtml_legend=1 00:20:09.937 --rc geninfo_all_blocks=1 00:20:09.937 --rc geninfo_unexecuted_blocks=1 00:20:09.937 00:20:09.937 ' 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:09.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.937 --rc genhtml_branch_coverage=1 00:20:09.937 --rc genhtml_function_coverage=1 00:20:09.937 --rc genhtml_legend=1 00:20:09.937 --rc geninfo_all_blocks=1 00:20:09.937 --rc geninfo_unexecuted_blocks=1 00:20:09.937 00:20:09.937 ' 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:09.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.937 --rc genhtml_branch_coverage=1 00:20:09.937 --rc genhtml_function_coverage=1 00:20:09.937 --rc genhtml_legend=1 00:20:09.937 --rc geninfo_all_blocks=1 00:20:09.937 --rc geninfo_unexecuted_blocks=1 00:20:09.937 00:20:09.937 ' 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.937 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:09.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:20:09.938 12:34:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.479 12:34:17 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:20:12.479 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:12.479 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:20:12.479 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 
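[Annotator's note. The discovery loop above walks pci_devs and matches vendor/device IDs; 0x15b3:0x1015 identifies the two ports of a Mellanox ConnectX-4 Lx here, and because the transport is rdma the harness also switches NVME_CONNECT to 'nvme connect -i 15'. A quick way to reproduce the same enumeration by hand, assuming lspci from pciutils is available on the node:]

    # List Mellanox (vendor 0x15b3) devices with numeric IDs, as the harness sees them.
    lspci -nn -d 15b3: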
00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:20:12.480 Found net devices under 0000:83:00.0: mlx_0_0 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:20:12.480 Found net devices under 0000:83:00.1: mlx_0_1 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@448 -- # rdma_device_init 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # uname 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:12.480 12:34:17 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 
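[editor's note] The address lookup that follows is just `ip -o -4` piped through awk and cut, exactly as traced. The same helper as a standalone sketch (pipeline copied from the trace; only the wrapper is reconstructed):

    get_ip_address() {
        local interface=$1
        # -o prints one record per line; field 4 is "ADDR/PREFIX", e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this runner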
00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:12.480 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:12.480 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:20:12.480 altname enp131s0f0np0 00:20:12.480 inet 192.168.100.8/24 scope global mlx_0_0 00:20:12.480 valid_lft forever preferred_lft forever 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:12.480 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:12.480 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:20:12.480 altname enp131s0f1np1 00:20:12.480 inet 192.168.100.9/24 scope global mlx_0_1 00:20:12.480 valid_lft forever preferred_lft forever 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 
-- # echo mlx_0_0 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:12.480 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:12.481 192.168.100.9' 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:12.481 192.168.100.9' 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # head -n 1 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # tail -n +2 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:12.481 192.168.100.9' 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # head -n 1 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 
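[editor's note] RDMA_IP_LIST above is a newline-separated list of the per-port addresses; the first and second target IPs fall out of head/tail, as the trace shows. In isolation:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
    [ -z "$NVMF_FIRST_TARGET_IP" ] && echo "no RDMA-capable interface found" >&2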
00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2800290 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2800290 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2800290 ']' 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.481 12:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:12.481 [2024-11-20 12:34:17.994646] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:20:12.481 [2024-11-20 12:34:17.994736] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.481 [2024-11-20 12:34:18.065323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.481 [2024-11-20 12:34:18.126359] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.481 [2024-11-20 12:34:18.126426] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.481 [2024-11-20 12:34:18.126443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.481 [2024-11-20 12:34:18.126456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.481 [2024-11-20 12:34:18.126467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
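[editor's note] waitforlisten, seen just above, blocks until the freshly forked target answers RPCs on its UNIX socket. A simplified sketch of that polling loop, assuming scripts/rpc.py and the default socket path (the real helper in autotest_common.sh also scopes retries and xtrace differently):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=0
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((i++ < 100)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            # a successful RPC means the app is up and the socket is live
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }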
00:20:12.481 [2024-11-20 12:34:18.126975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2800309 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=192.168.100.8 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=192.168.100.8 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=eb8d3eea-1550-43fe-9108-248515dbddc6 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=9b061556-1fd0-4a0e-a25b-eab0c8faa4f1 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=842c3c0d-2a62-40a7-bcc4-2232f25ff0f8 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.740 12:34:18 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:12.740 null0 00:20:12.740 null1 00:20:12.740 null2 00:20:12.740 [2024-11-20 12:34:18.371260] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:20:12.740 [2024-11-20 12:34:18.371339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2800309 ] 00:20:12.740 [2024-11-20 12:34:18.390404] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23a4ab0/0x2318cc0) succeed. 00:20:12.740 [2024-11-20 12:34:18.403839] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23a5f60/0x2283c70) succeed. 00:20:12.740 [2024-11-20 12:34:18.444148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.740 [2024-11-20 12:34:18.466778] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2800309 /var/tmp/tgt2.sock 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2800309 ']' 00:20:12.740 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:20:12.999 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.999 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:20:12.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:20:12.999 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.999 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:12.999 [2024-11-20 12:34:18.508517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.257 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.257 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:13.257 12:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:20:13.823 [2024-11-20 12:34:19.345539] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa64700/0xa749d0) succeed. 00:20:13.823 [2024-11-20 12:34:19.361043] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc49a90/0xab6070) succeed. 
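[editor's note] The NSID checks that follow hinge on uuid2nguid: an NGUID is the namespace UUID with the dashes stripped and the hex uppercased, compared against what `nvme id-ns ... -o json | jq -r .nguid` reports. A sketch of both halves (names follow the trace; bodies are a plausible reading of nvmf/common.sh and nsid.sh, not a verbatim copy):

    uuid2nguid() {
        # eb8d3eea-1550-43fe-9108-248515dbddc6 -> EB8D3EEA155043FE9108248515DBDDC6
        local uuid=${1^^}
        echo "$uuid" | tr -d -
    }
    nvme_get_nguid() {
        local ctrlr=$1 nsid=$2 nguid
        nguid=$(nvme id-ns "/dev/${ctrlr}n${nsid}" -o json | jq -r .nguid)
        echo "${nguid^^}"
    }
    [[ $(nvme_get_nguid nvme0 1) == "$(uuid2nguid eb8d3eea-1550-43fe-9108-248515dbddc6)" ]] \
        && echo "nsid 1 kept its NGUID across targets"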
00:20:13.823 [2024-11-20 12:34:19.416967] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:20:13.823 nvme0n1 nvme0n2 00:20:13.823 nvme1n1 00:20:13.823 12:34:19 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:20:13.823 12:34:19 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:20:13.823 12:34:19 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid eb8d3eea-1550-43fe-9108-248515dbddc6 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=eb8d3eea155043fe9108248515dbddc6 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo EB8D3EEA155043FE9108248515DBDDC6 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ EB8D3EEA155043FE9108248515DBDDC6 == \E\B\8\D\3\E\E\A\1\5\5\0\4\3\F\E\9\1\0\8\2\4\8\5\1\5\D\B\D\D\C\6 ]] 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:15.197 12:34:20 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:20:15.197 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 9b061556-1fd0-4a0e-a25b-eab0c8faa4f1 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9b0615561fd04a0ea25beab0c8faa4f1 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9B0615561FD04A0EA25BEAB0C8FAA4F1 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 9B0615561FD04A0EA25BEAB0C8FAA4F1 == \9\B\0\6\1\5\5\6\1\F\D\0\4\A\0\E\A\2\5\B\E\A\B\0\C\8\F\A\A\4\F\1 ]] 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 842c3c0d-2a62-40a7-bcc4-2232f25ff0f8 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=842c3c0d2a6240a7bcc42232f25ff0f8 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 842C3C0D2A6240A7BCC42232F25FF0F8 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 842C3C0D2A6240A7BCC42232F25FF0F8 == 
\8\4\2\C\3\C\0\D\2\A\6\2\4\0\A\7\B\C\C\4\2\2\3\2\F\2\5\F\F\0\F\8 ]] 00:20:15.198 12:34:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:20:16.134 12:34:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:20:16.134 12:34:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:20:16.134 12:34:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2800309 00:20:16.134 12:34:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2800309 ']' 00:20:16.134 12:34:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2800309 00:20:16.134 12:34:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:16.134 12:34:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:16.134 12:34:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2800309 00:20:16.134 12:34:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:16.134 12:34:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:16.134 12:34:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2800309' 00:20:16.134 killing process with pid 2800309 00:20:16.134 12:34:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2800309 00:20:16.134 12:34:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2800309 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:16.703 rmmod nvme_rdma 00:20:16.703 rmmod nvme_fabrics 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2800290 ']' 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2800290 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2800290 ']' 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2800290 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2800290 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2800290' 00:20:16.703 killing process with pid 2800290 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2800290 00:20:16.703 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2800290 00:20:16.962 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:16.962 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:16.962 00:20:16.962 real 0m7.205s 00:20:16.962 user 0m9.881s 00:20:16.962 sys 0m2.601s 00:20:16.962 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.962 12:34:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:16.962 ************************************ 00:20:16.962 END TEST nvmf_nsid 00:20:16.962 ************************************ 00:20:16.962 12:34:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:16.962 00:20:16.962 real 8m30.756s 00:20:16.962 user 22m29.191s 00:20:16.962 sys 1m17.049s 00:20:16.962 12:34:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.962 12:34:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:16.962 ************************************ 00:20:16.962 END TEST nvmf_target_extra 00:20:16.962 ************************************ 00:20:16.962 12:34:22 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:20:16.962 12:34:22 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:16.962 12:34:22 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.962 12:34:22 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:16.962 ************************************ 00:20:16.962 START TEST nvmf_host 00:20:16.962 ************************************ 00:20:16.962 12:34:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:20:16.962 * Looking for test storage... 
00:20:16.962 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:20:16.962 12:34:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:16.962 12:34:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:20:16.962 12:34:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:17.221 12:34:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:17.221 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:17.221 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:17.221 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:17.221 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:17.221 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:17.221 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:17.221 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:17.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.222 --rc genhtml_branch_coverage=1 00:20:17.222 --rc genhtml_function_coverage=1 00:20:17.222 --rc genhtml_legend=1 00:20:17.222 --rc geninfo_all_blocks=1 00:20:17.222 --rc geninfo_unexecuted_blocks=1 00:20:17.222 00:20:17.222 ' 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:20:17.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.222 --rc genhtml_branch_coverage=1 00:20:17.222 --rc genhtml_function_coverage=1 00:20:17.222 --rc genhtml_legend=1 00:20:17.222 --rc geninfo_all_blocks=1 00:20:17.222 --rc geninfo_unexecuted_blocks=1 00:20:17.222 00:20:17.222 ' 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:17.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.222 --rc genhtml_branch_coverage=1 00:20:17.222 --rc genhtml_function_coverage=1 00:20:17.222 --rc genhtml_legend=1 00:20:17.222 --rc geninfo_all_blocks=1 00:20:17.222 --rc geninfo_unexecuted_blocks=1 00:20:17.222 00:20:17.222 ' 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:17.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.222 --rc genhtml_branch_coverage=1 00:20:17.222 --rc genhtml_function_coverage=1 00:20:17.222 --rc genhtml_legend=1 00:20:17.222 --rc geninfo_all_blocks=1 00:20:17.222 --rc geninfo_unexecuted_blocks=1 00:20:17.222 00:20:17.222 ' 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:17.222 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.222 ************************************ 00:20:17.222 START TEST nvmf_multicontroller 00:20:17.222 ************************************ 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:20:17.222 * Looking for test storage... 00:20:17.222 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:20:17.222 12:34:22 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:17.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.482 --rc genhtml_branch_coverage=1 00:20:17.482 --rc genhtml_function_coverage=1 00:20:17.482 --rc genhtml_legend=1 00:20:17.482 --rc geninfo_all_blocks=1 00:20:17.482 --rc geninfo_unexecuted_blocks=1 00:20:17.482 00:20:17.482 ' 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:17.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.482 --rc genhtml_branch_coverage=1 00:20:17.482 --rc genhtml_function_coverage=1 00:20:17.482 --rc genhtml_legend=1 00:20:17.482 --rc geninfo_all_blocks=1 00:20:17.482 --rc geninfo_unexecuted_blocks=1 00:20:17.482 00:20:17.482 ' 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:17.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.482 --rc genhtml_branch_coverage=1 00:20:17.482 --rc genhtml_function_coverage=1 00:20:17.482 --rc genhtml_legend=1 00:20:17.482 --rc geninfo_all_blocks=1 00:20:17.482 --rc geninfo_unexecuted_blocks=1 00:20:17.482 00:20:17.482 ' 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:17.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.482 --rc genhtml_branch_coverage=1 00:20:17.482 --rc genhtml_function_coverage=1 00:20:17.482 --rc genhtml_legend=1 00:20:17.482 --rc geninfo_all_blocks=1 00:20:17.482 --rc geninfo_unexecuted_blocks=1 00:20:17.482 00:20:17.482 ' 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
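[editor's note] The "line 33: [: : integer expression expected" complaints that appear each time common.sh is sourced come from handing an empty string to an arithmetic test: `'[' '' -eq 1 ']'` is not a valid integer comparison, so `[` prints the warning and the branch falls through. The failing shape and a guard that avoids it (the variable name here is made up for illustration, not the actual common.sh variable):

    SOME_FLAG=""                        # unset/empty in this environment
    [ "$SOME_FLAG" -eq 1 ] && echo on   # -> "[: : integer expression expected", status 2
    # defaulting the value keeps the trace clean:
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo on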
00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.482 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:17.483 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:17.483 12:34:23 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:20:17.483 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:20:17.483 00:20:17.483 real 0m0.223s 00:20:17.483 user 0m0.156s 00:20:17.483 sys 0m0.077s 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:17.483 ************************************ 00:20:17.483 END TEST nvmf_multicontroller 00:20:17.483 ************************************ 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.483 ************************************ 00:20:17.483 START TEST nvmf_aer 00:20:17.483 ************************************ 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:20:17.483 * Looking for test storage... 
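multicontroller.sh bails out before doing any work on RDMA, and the harness still closes the run with timing lines and an END TEST banner, so the skip is booked as a pass. A rough sketch of that guard-and-wrap pattern; the transport variable name and the stripped-down run_test are illustrative (SPDK's wrapper in autotest_common.sh also prints the asterisk banners and validates its arguments):

    # Up-front skip guard, as in host/multicontroller.sh:
    if [ "$TEST_TRANSPORT" == "rdma" ]; then
        echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
        exit 0          # exit 0: the suite records the test as passed
    fi

    # Simplified run_test: time the script and bracket it with banners.
    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"       # produces the real/user/sys lines in the log
        echo "END TEST $name"
    }

    run_test nvmf_aer ./test/nvmf/host/aer.sh --transport=rdma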
00:20:17.483 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:20:17.483 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:20:17.484 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:20:17.484 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:17.484 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:20:17.484 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:20:17.484 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:17.484 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:17.484 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:20:17.484 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:17.484 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:17.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.484 --rc genhtml_branch_coverage=1 00:20:17.484 --rc genhtml_function_coverage=1 00:20:17.484 --rc genhtml_legend=1 00:20:17.484 --rc geninfo_all_blocks=1 00:20:17.484 --rc geninfo_unexecuted_blocks=1 00:20:17.484 00:20:17.484 ' 00:20:17.484 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:17.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.484 --rc genhtml_branch_coverage=1 00:20:17.484 --rc genhtml_function_coverage=1 00:20:17.484 --rc genhtml_legend=1 00:20:17.484 --rc geninfo_all_blocks=1 00:20:17.484 --rc geninfo_unexecuted_blocks=1 00:20:17.484 00:20:17.484 ' 00:20:17.484 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:17.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.484 --rc genhtml_branch_coverage=1 00:20:17.484 --rc genhtml_function_coverage=1 00:20:17.484 --rc genhtml_legend=1 00:20:17.484 --rc geninfo_all_blocks=1 00:20:17.484 --rc geninfo_unexecuted_blocks=1 00:20:17.484 00:20:17.484 ' 00:20:17.484 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:17.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.484 --rc genhtml_branch_coverage=1 00:20:17.484 --rc genhtml_function_coverage=1 00:20:17.484 --rc genhtml_legend=1 00:20:17.484 --rc geninfo_all_blocks=1 00:20:17.484 --rc geninfo_unexecuted_blocks=1 00:20:17.484 00:20:17.484 ' 00:20:17.484 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:17.484 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:17.743 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:17.744 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:20:17.744 12:34:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:19.651 12:34:25 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:20:19.651 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:20:19.651 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:20:19.651 Found net devices under 0000:83:00.0: mlx_0_0 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:19.651 
12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:20:19.651 Found net devices under 0000:83:00.1: mlx_0_1 00:20:19.651 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.652 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:19.652 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:20:19.652 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:19.652 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:19.652 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:19.652 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init 00:20:19.652 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:19.652 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:20:19.652 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:19.652 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:19.652 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:19.652 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:19.652 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:19.652 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:19.652 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:19.911 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:19.911 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:19.911 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:19.911 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:19.911 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:19.912 12:34:25 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:19.912 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:19.912 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:20:19.912 altname enp131s0f0np0 00:20:19.912 inet 192.168.100.8/24 scope global mlx_0_0 00:20:19.912 valid_lft forever preferred_lft forever 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:19.912 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:19.912 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:20:19.912 altname enp131s0f1np1 00:20:19.912 inet 192.168.100.9/24 scope global mlx_0_1 00:20:19.912 valid_lft forever preferred_lft forever 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 
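allocate_nic_ips above resolves each RDMA interface to its IPv4 address with the ip/awk/cut pipeline visible verbatim in the trace; mlx_0_0 resolves to 192.168.100.8 and mlx_0_1 to 192.168.100.9. The same helper as a standalone sketch (the function name get_ip_address matches the trace; the hardcoded interface is just for illustration):

    # First IPv4 address of an interface, per nvmf/common.sh get_ip_address.
    get_ip_address() {
        local interface=$1
        # Field 4 of "ip -o -4 addr show" is e.g. "192.168.100.8/24"; strip the mask.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    ip_addr=$(get_ip_address mlx_0_0)
    [ -n "$ip_addr" ] || { echo "no IPv4 on mlx_0_0" >&2; exit 1; }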
00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:19.912 192.168.100.9' 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:19.912 192.168.100.9' 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:19.912 192.168.100.9' 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@486 -- # head -n 1 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2802230 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2802230 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2802230 ']' 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.912 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:19.912 [2024-11-20 12:34:25.602002] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:20:19.912 [2024-11-20 12:34:25.602093] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.912 [2024-11-20 12:34:25.673249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:20.172 [2024-11-20 12:34:25.737388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.172 [2024-11-20 12:34:25.737448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.172 [2024-11-20 12:34:25.737463] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.172 [2024-11-20 12:34:25.737489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.172 [2024-11-20 12:34:25.737504] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:20.172 [2024-11-20 12:34:25.738811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.172 [2024-11-20 12:34:25.738932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.172 [2024-11-20 12:34:25.739002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:20.172 [2024-11-20 12:34:25.739006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.172 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.172 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:20:20.172 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:20.172 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:20.172 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.172 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.172 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:20.172 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.172 12:34:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.432 [2024-11-20 12:34:25.955394] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16b9df0/0x16be2e0) succeed. 00:20:20.432 [2024-11-20 12:34:25.970979] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16bb480/0x16ff980) succeed. 00:20:20.432 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.432 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:20.432 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.432 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.432 Malloc0 00:20:20.432 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.432 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:20.432 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.432 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.432 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.432 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:20.432 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.432 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.432 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.432 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:20.432 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.432 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.432 [2024-11-20 
12:34:26.187067] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:20.432 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.432 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:20.432 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.432 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.690 [ 00:20:20.691 { 00:20:20.691 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:20.691 "subtype": "Discovery", 00:20:20.691 "listen_addresses": [], 00:20:20.691 "allow_any_host": true, 00:20:20.691 "hosts": [] 00:20:20.691 }, 00:20:20.691 { 00:20:20.691 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.691 "subtype": "NVMe", 00:20:20.691 "listen_addresses": [ 00:20:20.691 { 00:20:20.691 "trtype": "RDMA", 00:20:20.691 "adrfam": "IPv4", 00:20:20.691 "traddr": "192.168.100.8", 00:20:20.691 "trsvcid": "4420" 00:20:20.691 } 00:20:20.691 ], 00:20:20.691 "allow_any_host": true, 00:20:20.691 "hosts": [], 00:20:20.691 "serial_number": "SPDK00000000000001", 00:20:20.691 "model_number": "SPDK bdev Controller", 00:20:20.691 "max_namespaces": 2, 00:20:20.691 "min_cntlid": 1, 00:20:20.691 "max_cntlid": 65519, 00:20:20.691 "namespaces": [ 00:20:20.691 { 00:20:20.691 "nsid": 1, 00:20:20.691 "bdev_name": "Malloc0", 00:20:20.691 "name": "Malloc0", 00:20:20.691 "nguid": "A7FEED83444B4F488B8B2969D7258D6F", 00:20:20.691 "uuid": "a7feed83-444b-4f48-8b8b-2969d7258d6f" 00:20:20.691 } 00:20:20.691 ] 00:20:20.691 } 00:20:20.691 ] 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2802263 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.691 Malloc1 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.691 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.964 [ 00:20:20.964 { 00:20:20.964 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:20.964 "subtype": "Discovery", 00:20:20.964 "listen_addresses": [], 00:20:20.964 "allow_any_host": true, 00:20:20.964 "hosts": [] 00:20:20.964 }, 00:20:20.964 { 00:20:20.964 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.964 "subtype": "NVMe", 00:20:20.964 "listen_addresses": [ 00:20:20.964 { 00:20:20.964 "trtype": "RDMA", 00:20:20.964 "adrfam": "IPv4", 00:20:20.964 "traddr": "192.168.100.8", 00:20:20.964 "trsvcid": "4420" 00:20:20.964 } 00:20:20.964 ], 00:20:20.964 "allow_any_host": true, 00:20:20.964 "hosts": [], 00:20:20.964 "serial_number": "SPDK00000000000001", 00:20:20.964 "model_number": "SPDK bdev Controller", 00:20:20.964 "max_namespaces": 2, 00:20:20.964 "min_cntlid": 1, 00:20:20.964 "max_cntlid": 65519, 00:20:20.964 "namespaces": [ 00:20:20.964 { 00:20:20.964 "nsid": 1, 00:20:20.964 "bdev_name": "Malloc0", 00:20:20.964 "name": "Malloc0", 00:20:20.964 "nguid": "A7FEED83444B4F488B8B2969D7258D6F", 00:20:20.964 "uuid": "a7feed83-444b-4f48-8b8b-2969d7258d6f" 00:20:20.964 }, 00:20:20.964 { 00:20:20.964 "nsid": 2, 00:20:20.964 "bdev_name": "Malloc1", 00:20:20.964 "name": "Malloc1", 00:20:20.964 "nguid": "51E092B7A35E4D0EBB43BD5468857241", 00:20:20.964 "uuid": "51e092b7-a35e-4d0e-bb43-bd5468857241" 00:20:20.964 } 00:20:20.964 ] 00:20:20.964 } 00:20:20.964 ] 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2802263 00:20:20.964 Asynchronous Event Request test 00:20:20.964 Attaching to 192.168.100.8 00:20:20.964 Attached to 192.168.100.8 00:20:20.964 Registering asynchronous event callbacks... 00:20:20.964 Starting namespace attribute notice tests for all controllers... 00:20:20.964 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:20.964 aer_cb - Changed Namespace 00:20:20.964 Cleaning up... 
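The aer tool and the shell script synchronize through a touch file: aer arms its Asynchronous Event Request, touches the file, the script's waitforfile poll (the i=1, i=2, sleep 0.1 records above) unblocks, and adding a second namespace makes the target emit the Changed Namespace AEN logged by aer_cb. A condensed sketch of that handshake; the trace drives the RPCs through rpc_cmd, so the direct scripts/rpc.py spelling below is an assumption:

    AER_TOUCH_FILE=/tmp/aer_touch_file
    rm -f "$AER_TOUCH_FILE"

    # Start the listener; -t names the file it touches once the AER is armed.
    ./test/nvme/aer/aer \
        -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t "$AER_TOUCH_FILE" &
    aerpid=$!

    # waitforfile: poll for up to 200 * 0.1s, as in autotest_common.sh.
    i=0
    while [ ! -e "$AER_TOUCH_FILE" ] && [ $i -lt 200 ]; do
        i=$((i + 1)); sleep 0.1
    done
    [ -e "$AER_TOUCH_FILE" ] || exit 1

    # A new namespace triggers the Changed Namespace notice seen above.
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait "$aerpid"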
00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:20.964 rmmod nvme_rdma 00:20:20.964 rmmod nvme_fabrics 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:20.964 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:20:20.965 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:20:20.965 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2802230 ']' 00:20:20.965 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2802230 00:20:20.965 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2802230 ']' 00:20:20.965 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2802230 00:20:20.965 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:20:20.965 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.965 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2802230 00:20:20.965 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:20.965 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:20.965 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2802230' 00:20:20.965 killing process 
with pid 2802230 00:20:20.965 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2802230 00:20:20.965 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2802230 00:20:21.225 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:21.225 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:21.225 00:20:21.225 real 0m3.888s 00:20:21.225 user 0m5.496s 00:20:21.225 sys 0m2.189s 00:20:21.225 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.225 12:34:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.225 ************************************ 00:20:21.225 END TEST nvmf_aer 00:20:21.225 ************************************ 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.486 ************************************ 00:20:21.486 START TEST nvmf_async_init 00:20:21.486 ************************************ 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:20:21.486 * Looking for test storage... 00:20:21.486 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:20:21.486 
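nvmftestfini above tears down in a fixed order: sync, unload the kernel initiator modules (the rmmod nvme_rdma / nvme_fabrics lines), then kill the target by pid and wait on it before the timing banner closes the test. A skeleton of that order; killprocess is simplified here, whereas the real helper also verifies the process name and retries the module unload up to 20 times:

    nvmfcleanup() {
        sync
        modprobe -v -r nvme-rdma       # prints the rmmod nvme_rdma line
        modprobe -v -r nvme-fabrics
    }

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 0
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"     # simplified: no uname/ps checks
    }

    nvmfcleanup
    killprocess "$nvmfpid"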
12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:21.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.486 --rc genhtml_branch_coverage=1 00:20:21.486 --rc genhtml_function_coverage=1 00:20:21.486 --rc genhtml_legend=1 00:20:21.486 --rc geninfo_all_blocks=1 00:20:21.486 --rc geninfo_unexecuted_blocks=1 00:20:21.486 00:20:21.486 ' 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:21.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.486 --rc genhtml_branch_coverage=1 00:20:21.486 --rc genhtml_function_coverage=1 00:20:21.486 --rc genhtml_legend=1 00:20:21.486 --rc geninfo_all_blocks=1 00:20:21.486 --rc geninfo_unexecuted_blocks=1 00:20:21.486 00:20:21.486 ' 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:21.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.486 --rc genhtml_branch_coverage=1 00:20:21.486 --rc genhtml_function_coverage=1 00:20:21.486 --rc genhtml_legend=1 00:20:21.486 --rc geninfo_all_blocks=1 00:20:21.486 --rc geninfo_unexecuted_blocks=1 00:20:21.486 00:20:21.486 ' 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:21.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.486 --rc genhtml_branch_coverage=1 00:20:21.486 --rc genhtml_function_coverage=1 00:20:21.486 --rc genhtml_legend=1 00:20:21.486 --rc geninfo_all_blocks=1 00:20:21.486 --rc geninfo_unexecuted_blocks=1 00:20:21.486 00:20:21.486 ' 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.486 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:21.487 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8c89db93f02e4ae893f2c075efed7ba6 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:20:21.487 12:34:27 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:20:24.113 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:20:24.113 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ mlx5_core == unbound ]] 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:24.113 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:20:24.114 Found net devices under 0000:83:00.0: mlx_0_0 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:20:24.114 Found net devices under 0000:83:00.1: mlx_0_1 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # 
modprobe ib_core 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:24.114 12:34:29 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:24.114 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:24.114 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:20:24.114 altname enp131s0f0np0 00:20:24.114 inet 192.168.100.8/24 scope global mlx_0_0 00:20:24.114 valid_lft forever preferred_lft forever 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:24.114 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:24.114 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:20:24.114 altname enp131s0f1np1 00:20:24.114 inet 192.168.100.9/24 scope global mlx_0_1 00:20:24.114 valid_lft forever preferred_lft forever 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:24.114 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:24.115 192.168.100.9' 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:24.115 192.168.100.9' 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:24.115 192.168.100.9' 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:24.115 12:34:29 
nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2803654 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2803654 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2803654 ']' 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:24.115 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.115 [2024-11-20 12:34:29.677545] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:20:24.115 [2024-11-20 12:34:29.677662] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.115 [2024-11-20 12:34:29.750589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.115 [2024-11-20 12:34:29.813486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.115 [2024-11-20 12:34:29.813551] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.115 [2024-11-20 12:34:29.813567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.115 [2024-11-20 12:34:29.813579] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.115 [2024-11-20 12:34:29.813598] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:24.115 [2024-11-20 12:34:29.814120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.374 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.374 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:20:24.374 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:24.374 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:24.374 12:34:29 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.374 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.374 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:20:24.374 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.374 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.374 [2024-11-20 12:34:30.049239] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb5c9e0/0xb60ed0) succeed. 00:20:24.374 [2024-11-20 12:34:30.062777] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb5de90/0xba2570) succeed. 00:20:24.375 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.375 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:24.375 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.375 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.375 null0 00:20:24.375 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.375 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:24.375 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.375 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.375 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.375 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:24.375 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.375 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.375 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.375 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8c89db93f02e4ae893f2c075efed7ba6 00:20:24.375 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.375 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.375 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.375 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:20:24.375 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.375 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.634 [2024-11-20 12:34:30.140795] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:24.634 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.634 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:24.634 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.634 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.634 nvme0n1 00:20:24.634 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.634 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:24.634 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.634 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.634 [ 00:20:24.634 { 00:20:24.634 "name": "nvme0n1", 00:20:24.634 "aliases": [ 00:20:24.634 "8c89db93-f02e-4ae8-93f2-c075efed7ba6" 00:20:24.634 ], 00:20:24.634 "product_name": "NVMe disk", 00:20:24.634 "block_size": 512, 00:20:24.634 "num_blocks": 2097152, 00:20:24.634 "uuid": "8c89db93-f02e-4ae8-93f2-c075efed7ba6", 00:20:24.634 "numa_id": 1, 00:20:24.634 "assigned_rate_limits": { 00:20:24.634 "rw_ios_per_sec": 0, 00:20:24.634 "rw_mbytes_per_sec": 0, 00:20:24.634 "r_mbytes_per_sec": 0, 00:20:24.634 "w_mbytes_per_sec": 0 00:20:24.634 }, 00:20:24.634 "claimed": false, 00:20:24.634 "zoned": false, 00:20:24.634 "supported_io_types": { 00:20:24.634 "read": true, 00:20:24.634 "write": true, 00:20:24.634 "unmap": false, 00:20:24.634 "flush": true, 00:20:24.634 "reset": true, 00:20:24.634 "nvme_admin": true, 00:20:24.634 "nvme_io": true, 00:20:24.634 "nvme_io_md": false, 00:20:24.634 "write_zeroes": true, 00:20:24.634 "zcopy": false, 00:20:24.634 "get_zone_info": false, 00:20:24.634 "zone_management": false, 00:20:24.634 "zone_append": false, 00:20:24.634 "compare": true, 00:20:24.634 "compare_and_write": true, 00:20:24.634 "abort": true, 00:20:24.634 "seek_hole": false, 00:20:24.634 "seek_data": false, 00:20:24.634 "copy": true, 00:20:24.634 "nvme_iov_md": false 00:20:24.634 }, 00:20:24.634 "memory_domains": [ 00:20:24.634 { 00:20:24.634 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:20:24.634 "dma_device_type": 0 00:20:24.634 } 00:20:24.634 ], 00:20:24.634 "driver_specific": { 00:20:24.634 "nvme": [ 00:20:24.634 { 00:20:24.634 "trid": { 00:20:24.634 "trtype": "RDMA", 00:20:24.634 "adrfam": "IPv4", 00:20:24.634 "traddr": "192.168.100.8", 00:20:24.634 "trsvcid": "4420", 00:20:24.634 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:24.634 }, 00:20:24.634 "ctrlr_data": { 00:20:24.634 "cntlid": 1, 00:20:24.634 "vendor_id": "0x8086", 00:20:24.634 "model_number": "SPDK bdev Controller", 00:20:24.634 "serial_number": "00000000000000000000", 00:20:24.634 "firmware_revision": "25.01", 00:20:24.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:24.634 "oacs": { 00:20:24.634 "security": 0, 
00:20:24.634 "format": 0, 00:20:24.634 "firmware": 0, 00:20:24.634 "ns_manage": 0 00:20:24.634 }, 00:20:24.634 "multi_ctrlr": true, 00:20:24.634 "ana_reporting": false 00:20:24.634 }, 00:20:24.634 "vs": { 00:20:24.634 "nvme_version": "1.3" 00:20:24.634 }, 00:20:24.634 "ns_data": { 00:20:24.634 "id": 1, 00:20:24.634 "can_share": true 00:20:24.634 } 00:20:24.634 } 00:20:24.634 ], 00:20:24.634 "mp_policy": "active_passive" 00:20:24.634 } 00:20:24.634 } 00:20:24.634 ] 00:20:24.634 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.634 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:24.634 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.634 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.634 [2024-11-20 12:34:30.244496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:24.634 [2024-11-20 12:34:30.270919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:20:24.634 [2024-11-20 12:34:30.299666] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:20:24.634 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.634 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:24.634 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.634 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.634 [ 00:20:24.635 { 00:20:24.635 "name": "nvme0n1", 00:20:24.635 "aliases": [ 00:20:24.635 "8c89db93-f02e-4ae8-93f2-c075efed7ba6" 00:20:24.635 ], 00:20:24.635 "product_name": "NVMe disk", 00:20:24.635 "block_size": 512, 00:20:24.635 "num_blocks": 2097152, 00:20:24.635 "uuid": "8c89db93-f02e-4ae8-93f2-c075efed7ba6", 00:20:24.635 "numa_id": 1, 00:20:24.635 "assigned_rate_limits": { 00:20:24.635 "rw_ios_per_sec": 0, 00:20:24.635 "rw_mbytes_per_sec": 0, 00:20:24.635 "r_mbytes_per_sec": 0, 00:20:24.635 "w_mbytes_per_sec": 0 00:20:24.635 }, 00:20:24.635 "claimed": false, 00:20:24.635 "zoned": false, 00:20:24.635 "supported_io_types": { 00:20:24.635 "read": true, 00:20:24.635 "write": true, 00:20:24.635 "unmap": false, 00:20:24.635 "flush": true, 00:20:24.635 "reset": true, 00:20:24.635 "nvme_admin": true, 00:20:24.635 "nvme_io": true, 00:20:24.635 "nvme_io_md": false, 00:20:24.635 "write_zeroes": true, 00:20:24.635 "zcopy": false, 00:20:24.635 "get_zone_info": false, 00:20:24.635 "zone_management": false, 00:20:24.635 "zone_append": false, 00:20:24.635 "compare": true, 00:20:24.635 "compare_and_write": true, 00:20:24.635 "abort": true, 00:20:24.635 "seek_hole": false, 00:20:24.635 "seek_data": false, 00:20:24.635 "copy": true, 00:20:24.635 "nvme_iov_md": false 00:20:24.635 }, 00:20:24.635 "memory_domains": [ 00:20:24.635 { 00:20:24.635 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:20:24.635 "dma_device_type": 0 00:20:24.635 } 00:20:24.635 ], 00:20:24.635 "driver_specific": { 00:20:24.635 "nvme": [ 00:20:24.635 { 00:20:24.635 "trid": { 00:20:24.635 "trtype": "RDMA", 00:20:24.635 "adrfam": "IPv4", 00:20:24.635 "traddr": "192.168.100.8", 
00:20:24.635 "trsvcid": "4420", 00:20:24.635 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:24.635 }, 00:20:24.635 "ctrlr_data": { 00:20:24.635 "cntlid": 2, 00:20:24.635 "vendor_id": "0x8086", 00:20:24.635 "model_number": "SPDK bdev Controller", 00:20:24.635 "serial_number": "00000000000000000000", 00:20:24.635 "firmware_revision": "25.01", 00:20:24.635 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:24.635 "oacs": { 00:20:24.635 "security": 0, 00:20:24.635 "format": 0, 00:20:24.635 "firmware": 0, 00:20:24.635 "ns_manage": 0 00:20:24.635 }, 00:20:24.635 "multi_ctrlr": true, 00:20:24.635 "ana_reporting": false 00:20:24.635 }, 00:20:24.635 "vs": { 00:20:24.635 "nvme_version": "1.3" 00:20:24.635 }, 00:20:24.635 "ns_data": { 00:20:24.635 "id": 1, 00:20:24.635 "can_share": true 00:20:24.635 } 00:20:24.635 } 00:20:24.635 ], 00:20:24.635 "mp_policy": "active_passive" 00:20:24.635 } 00:20:24.635 } 00:20:24.635 ] 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.4CV8buZsF8 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.4CV8buZsF8 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.4CV8buZsF8 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.635 [2024-11-20 12:34:30.383238] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.635 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.894 [2024-11-20 12:34:30.399284] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.894 nvme0n1 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.894 [ 00:20:24.894 { 00:20:24.894 "name": "nvme0n1", 00:20:24.894 "aliases": [ 00:20:24.894 "8c89db93-f02e-4ae8-93f2-c075efed7ba6" 00:20:24.894 ], 00:20:24.894 "product_name": "NVMe disk", 00:20:24.894 "block_size": 512, 00:20:24.894 "num_blocks": 2097152, 00:20:24.894 "uuid": "8c89db93-f02e-4ae8-93f2-c075efed7ba6", 00:20:24.894 "numa_id": 1, 00:20:24.894 "assigned_rate_limits": { 00:20:24.894 "rw_ios_per_sec": 0, 00:20:24.894 "rw_mbytes_per_sec": 0, 00:20:24.894 "r_mbytes_per_sec": 0, 00:20:24.894 "w_mbytes_per_sec": 0 00:20:24.894 }, 00:20:24.894 "claimed": false, 00:20:24.894 "zoned": false, 00:20:24.894 "supported_io_types": { 00:20:24.894 "read": true, 00:20:24.894 "write": true, 00:20:24.894 "unmap": false, 00:20:24.894 "flush": true, 00:20:24.894 "reset": true, 00:20:24.894 "nvme_admin": true, 00:20:24.894 "nvme_io": true, 00:20:24.894 "nvme_io_md": false, 00:20:24.894 "write_zeroes": true, 00:20:24.894 "zcopy": false, 00:20:24.894 "get_zone_info": false, 00:20:24.894 "zone_management": false, 00:20:24.894 "zone_append": false, 00:20:24.894 "compare": true, 00:20:24.894 "compare_and_write": true, 00:20:24.894 "abort": true, 00:20:24.894 "seek_hole": false, 00:20:24.894 "seek_data": false, 00:20:24.894 "copy": true, 00:20:24.894 "nvme_iov_md": false 00:20:24.894 }, 00:20:24.894 "memory_domains": [ 00:20:24.894 { 00:20:24.894 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:20:24.894 "dma_device_type": 0 00:20:24.894 } 00:20:24.894 ], 00:20:24.894 "driver_specific": { 00:20:24.894 "nvme": [ 00:20:24.894 { 00:20:24.894 "trid": { 00:20:24.894 "trtype": "RDMA", 00:20:24.894 "adrfam": "IPv4", 00:20:24.894 "traddr": "192.168.100.8", 00:20:24.894 "trsvcid": "4421", 00:20:24.894 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:24.894 }, 00:20:24.894 "ctrlr_data": { 00:20:24.894 "cntlid": 3, 00:20:24.894 "vendor_id": "0x8086", 00:20:24.894 "model_number": "SPDK bdev Controller", 00:20:24.894 
"serial_number": "00000000000000000000", 00:20:24.894 "firmware_revision": "25.01", 00:20:24.894 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:24.894 "oacs": { 00:20:24.894 "security": 0, 00:20:24.894 "format": 0, 00:20:24.894 "firmware": 0, 00:20:24.894 "ns_manage": 0 00:20:24.894 }, 00:20:24.894 "multi_ctrlr": true, 00:20:24.894 "ana_reporting": false 00:20:24.894 }, 00:20:24.894 "vs": { 00:20:24.894 "nvme_version": "1.3" 00:20:24.894 }, 00:20:24.894 "ns_data": { 00:20:24.894 "id": 1, 00:20:24.894 "can_share": true 00:20:24.894 } 00:20:24.894 } 00:20:24.894 ], 00:20:24.894 "mp_policy": "active_passive" 00:20:24.894 } 00:20:24.894 } 00:20:24.894 ] 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.4CV8buZsF8 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:24.894 rmmod nvme_rdma 00:20:24.894 rmmod nvme_fabrics 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2803654 ']' 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2803654 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2803654 ']' 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2803654 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2803654 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:24.894 12:34:30 
nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2803654' 00:20:24.894 killing process with pid 2803654 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2803654 00:20:24.894 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2803654 00:20:25.153 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:25.153 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:25.153 00:20:25.153 real 0m3.837s 00:20:25.153 user 0m2.223s 00:20:25.153 sys 0m2.138s 00:20:25.153 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:25.153 12:34:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.153 ************************************ 00:20:25.153 END TEST nvmf_async_init 00:20:25.153 ************************************ 00:20:25.153 12:34:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:20:25.153 12:34:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:25.153 12:34:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:25.153 12:34:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.153 ************************************ 00:20:25.153 START TEST dma 00:20:25.153 ************************************ 00:20:25.153 12:34:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:20:25.412 * Looking for test storage... 
00:20:25.412 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:25.412 12:34:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:25.412 12:34:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:20:25.412 12:34:30 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:25.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.412 --rc genhtml_branch_coverage=1 00:20:25.412 --rc genhtml_function_coverage=1 00:20:25.412 --rc genhtml_legend=1 00:20:25.412 --rc geninfo_all_blocks=1 00:20:25.412 --rc geninfo_unexecuted_blocks=1 00:20:25.412 00:20:25.412 ' 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:25.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.412 --rc genhtml_branch_coverage=1 00:20:25.412 --rc genhtml_function_coverage=1 00:20:25.412 --rc genhtml_legend=1 00:20:25.412 --rc geninfo_all_blocks=1 00:20:25.412 --rc geninfo_unexecuted_blocks=1 00:20:25.412 00:20:25.412 ' 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:25.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.412 --rc genhtml_branch_coverage=1 00:20:25.412 --rc genhtml_function_coverage=1 00:20:25.412 --rc genhtml_legend=1 00:20:25.412 --rc geninfo_all_blocks=1 00:20:25.412 --rc geninfo_unexecuted_blocks=1 00:20:25.412 00:20:25.412 ' 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:25.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.412 --rc genhtml_branch_coverage=1 00:20:25.412 --rc genhtml_function_coverage=1 00:20:25.412 --rc genhtml_legend=1 00:20:25.412 --rc geninfo_all_blocks=1 00:20:25.412 --rc geninfo_unexecuted_blocks=1 00:20:25.412 00:20:25.412 ' 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.412 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:25.413 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
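The "[: : integer expression expected" message above is shell noise rather than test output: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', i.e. an unset flag reaches a numeric test. The usual guard is sketched below with a placeholder variable and option, since this log does not show which flag line 33 actually reads:

    # default the flag to 0 so an empty/unset value survives the -eq test
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-option)   # hypothetical consumer of the flag
    fi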
00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:20:25.413 12:34:31 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:20:27.951 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:20:27.951 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:20:27.951 Found net devices under 0000:83:00.0: mlx_0_0 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.951 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:20:27.951 Found net devices under 0000:83:00.1: mlx_0_1 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:27.952 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:27.952 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:20:27.952 altname enp131s0f0np0 00:20:27.952 inet 192.168.100.8/24 scope global mlx_0_0 00:20:27.952 valid_lft forever preferred_lft forever 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:27.952 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:27.952 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:20:27.952 altname enp131s0f1np1 00:20:27.952 inet 192.168.100.9/24 scope global mlx_0_1 00:20:27.952 valid_lft forever preferred_lft forever 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:27.952 192.168.100.9' 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # head -n 1 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:27.952 192.168.100.9' 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:27.952 192.168.100.9' 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:27.952 
12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # nvmfpid=2805161 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 2805161 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # '[' -z 2805161 ']' 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.952 12:34:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:27.952 [2024-11-20 12:34:33.500939] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:20:27.953 [2024-11-20 12:34:33.501028] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.953 [2024-11-20 12:34:33.573671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:27.953 [2024-11-20 12:34:33.635108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.953 [2024-11-20 12:34:33.635165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.953 [2024-11-20 12:34:33.635181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:27.953 [2024-11-20 12:34:33.635195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:27.953 [2024-11-20 12:34:33.635206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
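nvmfappstart above reduces to: launch nvmf_tgt with the requested core mask, record its pid, and block in waitforlisten until the RPC socket answers. A stripped-down sketch using the same workspace paths (the real waitforlisten adds retry limits and error reporting on top of this):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # poll the default UNIX-domain RPC socket until the target serves RPCs
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done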
00:20:27.953 [2024-11-20 12:34:33.636347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.953 [2024-11-20 12:34:33.636354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.211 12:34:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.211 12:34:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@868 -- # return 0 00:20:28.211 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:28.211 12:34:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:28.211 12:34:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:28.211 12:34:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.211 12:34:33 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:20:28.211 12:34:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.211 12:34:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:28.211 [2024-11-20 12:34:33.852272] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6fb560/0x6ffa50) succeed. 00:20:28.211 [2024-11-20 12:34:33.865894] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6fcab0/0x7410f0) succeed. 00:20:28.211 12:34:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.211 12:34:33 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:20:28.211 12:34:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.211 12:34:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:28.470 Malloc0 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:28.470 [2024-11-20 12:34:34.053698] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 
-o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=() 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:28.470 { 00:20:28.470 "params": { 00:20:28.470 "name": "Nvme$subsystem", 00:20:28.470 "trtype": "$TEST_TRANSPORT", 00:20:28.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.470 "adrfam": "ipv4", 00:20:28.470 "trsvcid": "$NVMF_PORT", 00:20:28.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.470 "hdgst": ${hdgst:-false}, 00:20:28.470 "ddgst": ${ddgst:-false} 00:20:28.470 }, 00:20:28.470 "method": "bdev_nvme_attach_controller" 00:20:28.470 } 00:20:28.470 EOF 00:20:28.470 )") 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq . 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=, 00:20:28.470 12:34:34 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:28.470 "params": { 00:20:28.470 "name": "Nvme0", 00:20:28.470 "trtype": "rdma", 00:20:28.470 "traddr": "192.168.100.8", 00:20:28.470 "adrfam": "ipv4", 00:20:28.470 "trsvcid": "4420", 00:20:28.470 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:28.470 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:28.470 "hdgst": false, 00:20:28.470 "ddgst": false 00:20:28.470 }, 00:20:28.470 "method": "bdev_nvme_attach_controller" 00:20:28.470 }' 00:20:28.471 [2024-11-20 12:34:34.108425] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
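The dma.sh steps above provision the target entirely over RPC; the same sequence can be replayed by hand with scripts/rpc.py (arguments copied from the rpc_cmd calls in this log):

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py bdev_malloc_create 256 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

The --json blob printed by gen_nvmf_target_json then simply points test_dma's bdev_nvme_attach_controller at that listener (traddr 192.168.100.8, trsvcid 4420, subnqn cnode0).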
00:20:28.471 [2024-11-20 12:34:34.108528] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2805193 ] 00:20:28.471 [2024-11-20 12:34:34.181963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:28.729 [2024-11-20 12:34:34.247973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:28.729 [2024-11-20 12:34:34.248004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.997 bdev Nvme0n1 reports 1 memory domains 00:20:33.997 bdev Nvme0n1 supports RDMA memory domain 00:20:33.997 Initialization complete, running randrw IO for 5 sec on 2 cores 00:20:33.997 ========================================================================== 00:20:33.997 Latency [us] 00:20:33.997 IOPS MiB/s Average min max 00:20:33.997 Core 2: 15395.40 60.14 1038.34 420.99 17016.73 00:20:33.997 Core 3: 15572.78 60.83 1026.52 439.92 16839.42 00:20:33.997 ========================================================================== 00:20:33.997 Total : 30968.17 120.97 1032.39 420.99 17016.73 00:20:33.997 00:20:33.997 Total operations: 154857, translate 154857 pull_push 0 memzero 0 00:20:33.997 12:34:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:20:33.997 12:34:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:20:33.997 12:34:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:20:33.997 [2024-11-20 12:34:39.730973] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
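The translate summary above is internally consistent: the per-core rates add up to 15395.40 + 15572.78 = 30968.18 IOPS, which at 4 KiB per I/O is 30968.17 x 4096 / 2^20 = 120.97 MiB/s as reported, and over the 5-second run gives roughly 30968 x 5 = 154841 I/Os against the reported 154857 total (the small delta is ramp and rounding). Every operation lands in the translate bucket because Nvme0n1 advertises an RDMA memory domain, so the -x translate run exercises the address-translation path rather than pull/push copies.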
00:20:33.997 [2024-11-20 12:34:39.731075] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2805670 ] 00:20:34.255 [2024-11-20 12:34:39.803744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:34.255 [2024-11-20 12:34:39.869086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:34.255 [2024-11-20 12:34:39.869091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.520 bdev Malloc0 reports 2 memory domains 00:20:39.520 bdev Malloc0 doesn't support RDMA memory domain 00:20:39.520 Initialization complete, running randrw IO for 5 sec on 2 cores 00:20:39.520 ========================================================================== 00:20:39.520 Latency [us] 00:20:39.520 IOPS MiB/s Average min max 00:20:39.520 Core 2: 10674.52 41.70 1497.85 576.42 1884.25 00:20:39.520 Core 3: 10749.28 41.99 1487.41 569.44 2585.69 00:20:39.520 ========================================================================== 00:20:39.520 Total : 21423.80 83.69 1492.61 569.44 2585.69 00:20:39.520 00:20:39.520 Total operations: 107170, translate 0 pull_push 428680 memzero 0 00:20:39.520 12:34:45 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:20:39.520 12:34:45 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:20:39.520 12:34:45 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:20:39.520 12:34:45 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:20:39.520 Ignoring -M option 00:20:39.520 [2024-11-20 12:34:45.224392] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:20:39.520 [2024-11-20 12:34:45.224499] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2806156 ] 00:20:39.777 [2024-11-20 12:34:45.297059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:39.777 [2024-11-20 12:34:45.362548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:39.777 [2024-11-20 12:34:45.362581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.333 bdev 982efda3-f58f-4965-9b91-7a83021f1e67 reports 1 memory domains 00:20:46.333 bdev 982efda3-f58f-4965-9b91-7a83021f1e67 supports RDMA memory domain 00:20:46.333 Initialization complete, running randread IO for 5 sec on 2 cores 00:20:46.333 ========================================================================== 00:20:46.333 Latency [us] 00:20:46.333 IOPS MiB/s Average min max 00:20:46.333 Core 2: 58730.25 229.42 271.36 110.29 5313.87 00:20:46.333 Core 3: 60627.21 236.83 262.87 102.39 5384.34 00:20:46.333 ========================================================================== 00:20:46.333 Total : 119357.46 466.24 267.04 102.39 5384.34 00:20:46.333 00:20:46.333 Total operations: 596862, translate 0 pull_push 0 memzero 596862 00:20:46.333 12:34:50 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:20:46.333 [2024-11-20 12:34:50.981750] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:47.708 Initializing NVMe Controllers 00:20:47.708 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:20:47.708 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:20:47.708 Initialization complete. Launching workers. 00:20:47.708 ======================================================== 00:20:47.708 Latency(us) 00:20:47.708 Device Information : IOPS MiB/s Average min max 00:20:47.708 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2032.00 7.94 7926.94 5993.18 8985.24 00:20:47.708 ======================================================== 00:20:47.708 Total : 2032.00 7.94 7926.94 5993.18 8985.24 00:20:47.708 00:20:47.708 12:34:53 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:20:47.708 12:34:53 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:20:47.708 12:34:53 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:20:47.708 12:34:53 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:20:47.708 [2024-11-20 12:34:53.387058] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
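The spdk_nvme_perf numbers above check out (2032.00 IOPS x 4 KiB = 7.94 MiB/s), but note the subsystem.c warning printed before the run: perf probed the discovery service on 192.168.100.8:4420, a listener that was only ever added to cnode0. The target still allows this, but the message says the behavior is deprecated; the forward-compatible fix is to register the listener on the well-known discovery subsystem as well, along these lines:

    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t rdma -a 192.168.100.8 -s 4420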
00:20:47.708 [2024-11-20 12:34:53.387149] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2806828 ] 00:20:47.708 [2024-11-20 12:34:53.460355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:47.967 [2024-11-20 12:34:53.526723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:47.967 [2024-11-20 12:34:53.526758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.233 bdev a2132e3c-08ed-4987-a60f-c3735979c238 reports 1 memory domains 00:20:53.233 bdev a2132e3c-08ed-4987-a60f-c3735979c238 supports RDMA memory domain 00:20:53.233 Initialization complete, running randrw IO for 5 sec on 2 cores 00:20:53.233 ========================================================================== 00:20:53.233 Latency [us] 00:20:53.233 IOPS MiB/s Average min max 00:20:53.233 Core 2: 13682.58 53.45 1168.45 26.11 11272.66 00:20:53.233 Core 3: 13485.45 52.68 1185.56 16.83 10931.87 00:20:53.233 ========================================================================== 00:20:53.233 Total : 27168.03 106.13 1176.94 16.83 11272.66 00:20:53.233 00:20:53.233 Total operations: 135886, translate 135777 pull_push 0 memzero 109 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:53.492 rmmod nvme_rdma 00:20:53.492 rmmod nvme_fabrics 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 2805161 ']' 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 2805161 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # '[' -z 2805161 ']' 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # kill -0 2805161 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # uname 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2805161 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2805161' 00:20:53.492 killing 
process with pid 2805161 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@973 -- # kill 2805161 00:20:53.492 12:34:59 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@978 -- # wait 2805161 00:20:53.750 12:34:59 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:53.750 12:34:59 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:53.750 00:20:53.750 real 0m28.493s 00:20:53.750 user 1m34.984s 00:20:53.750 sys 0m2.993s 00:20:53.750 12:34:59 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:53.750 12:34:59 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:53.750 ************************************ 00:20:53.750 END TEST dma 00:20:53.750 ************************************ 00:20:53.750 12:34:59 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:20:53.750 12:34:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:53.750 12:34:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:53.750 12:34:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.750 ************************************ 00:20:53.750 START TEST nvmf_identify 00:20:53.750 ************************************ 00:20:53.750 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:20:53.750 * Looking for test storage... 00:20:53.750 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:53.750 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:53.750 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:20:53.750 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:54.009 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:54.009 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:54.009 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:54.009 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:54.009 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:20:54.009 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:20:54.009 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:20:54.009 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:20:54.009 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:20:54.009 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:20:54.009 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:20:54.009 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:54.009 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:20:54.009 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:20:54.009 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 
-- # (( v = 0 )) 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:54.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.010 --rc genhtml_branch_coverage=1 00:20:54.010 --rc genhtml_function_coverage=1 00:20:54.010 --rc genhtml_legend=1 00:20:54.010 --rc geninfo_all_blocks=1 00:20:54.010 --rc geninfo_unexecuted_blocks=1 00:20:54.010 00:20:54.010 ' 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:54.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.010 --rc genhtml_branch_coverage=1 00:20:54.010 --rc genhtml_function_coverage=1 00:20:54.010 --rc genhtml_legend=1 00:20:54.010 --rc geninfo_all_blocks=1 00:20:54.010 --rc geninfo_unexecuted_blocks=1 00:20:54.010 00:20:54.010 ' 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:54.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.010 --rc genhtml_branch_coverage=1 00:20:54.010 --rc genhtml_function_coverage=1 00:20:54.010 --rc genhtml_legend=1 00:20:54.010 --rc geninfo_all_blocks=1 00:20:54.010 --rc geninfo_unexecuted_blocks=1 00:20:54.010 00:20:54.010 ' 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:54.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.010 --rc genhtml_branch_coverage=1 00:20:54.010 --rc genhtml_function_coverage=1 00:20:54.010 --rc genhtml_legend=1 00:20:54.010 --rc geninfo_all_blocks=1 00:20:54.010 --rc geninfo_unexecuted_blocks=1 00:20:54.010 00:20:54.010 ' 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:54.010 12:34:59 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:54.010 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.011 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.011 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.011 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:54.011 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:54.011 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:54.011 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:54.011 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:54.011 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:54.011 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:54.011 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:54.011 12:34:59 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:54.011 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.011 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:54.011 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:54.011 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:54.011 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.011 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.011 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.011 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:54.011 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:54.011 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:20:54.011 12:34:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:56.550 12:35:01 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:20:56.550 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.550 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:20:56.550 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:20:56.551 Found net devices under 0000:83:00.0: mlx_0_0 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:20:56.551 Found net devices under 0000:83:00.1: mlx_0_1 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:56.551 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:56.551 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:20:56.551 altname enp131s0f0np0 00:20:56.551 inet 192.168.100.8/24 scope global mlx_0_0 00:20:56.551 valid_lft forever preferred_lft forever 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # 
interface=mlx_0_1 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:56.551 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:56.551 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:20:56.551 altname enp131s0f1np1 00:20:56.551 inet 192.168.100.9/24 scope global mlx_0_1 00:20:56.551 valid_lft forever preferred_lft forever 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:56.551 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in 
$(get_rdma_if_list) 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:56.552 192.168.100.9' 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:56.552 192.168.100.9' 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:56.552 192.168.100.9' 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # tail -n +2 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2808684 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2808684 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify 
-- common/autotest_common.sh@835 -- # '[' -z 2808684 ']' 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.552 12:35:01 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:56.552 [2024-11-20 12:35:02.007910] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:20:56.552 [2024-11-20 12:35:02.008015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.552 [2024-11-20 12:35:02.082925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:56.552 [2024-11-20 12:35:02.146581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.552 [2024-11-20 12:35:02.146642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.552 [2024-11-20 12:35:02.146658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.552 [2024-11-20 12:35:02.146671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.552 [2024-11-20 12:35:02.146683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.552 [2024-11-20 12:35:02.147973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.552 [2024-11-20 12:35:02.148029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.552 [2024-11-20 12:35:02.148081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:56.552 [2024-11-20 12:35:02.148085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.552 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.552 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:20:56.552 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:56.552 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.552 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:56.810 [2024-11-20 12:35:02.323118] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b55df0/0x1b5a2e0) succeed. 00:20:56.810 [2024-11-20 12:35:02.338311] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b57480/0x1b9b980) succeed. 
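A hedged reproduction sketch (not captured output): the trace above launches the target and creates the RDMA transport through the harness's rpc_cmd wrapper. Assuming rpc_cmd forwards to SPDK's scripts/rpc.py, the equivalent standalone bring-up would look roughly like this; the binary path and every flag are copied from the trace itself:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # Launch the NVMe-oF target app: shm id 0, tracepoint mask 0xFFFF, 4-core mask, as logged.
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  # Create the RDMA transport with the same options the test passed via rpc_cmd.
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192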
00:20:56.810 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:56.810 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:20:56.810 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:56.810 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:56.810 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:20:56.810 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:56.810 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:56.810 Malloc0
00:20:56.810 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:56.810 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:56.810 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:56.810 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:57.071 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:57.072 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:20:57.072 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:57.072 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:57.072 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:57.072 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:20:57.072 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:57.072 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:57.072 [2024-11-20 12:35:02.591238] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:20:57.072 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:57.072 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:20:57.072 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:57.072 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:57.072 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:57.072 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:20:57.072 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:57.072 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:57.072 [
00:20:57.072 {
00:20:57.072 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:20:57.072 "subtype": "Discovery",
00:20:57.072 "listen_addresses": [
00:20:57.072 {
00:20:57.072 "trtype": "RDMA",
00:20:57.072 "adrfam": "IPv4",
00:20:57.072 "traddr": "192.168.100.8",
00:20:57.072 "trsvcid": "4420"
00:20:57.072 }
00:20:57.072 ],
00:20:57.072 "allow_any_host": true,
00:20:57.072 "hosts": []
00:20:57.072 },
00:20:57.072 {
00:20:57.072 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:57.072 "subtype": "NVMe",
00:20:57.072 "listen_addresses": [
00:20:57.072 {
00:20:57.072 "trtype": "RDMA",
00:20:57.072 "adrfam": "IPv4",
00:20:57.072 "traddr": "192.168.100.8",
00:20:57.072 "trsvcid": "4420"
00:20:57.072 }
00:20:57.072 ],
00:20:57.072 "allow_any_host": true,
00:20:57.072 "hosts": [],
00:20:57.072 "serial_number": "SPDK00000000000001",
00:20:57.072 "model_number": "SPDK bdev Controller",
00:20:57.072 "max_namespaces": 32,
00:20:57.072 "min_cntlid": 1,
00:20:57.072 "max_cntlid": 65519,
00:20:57.072 "namespaces": [
00:20:57.072 {
00:20:57.072 "nsid": 1,
00:20:57.072 "bdev_name": "Malloc0",
00:20:57.072 "name": "Malloc0",
00:20:57.072 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:20:57.072 "eui64": "ABCDEF0123456789",
00:20:57.072 "uuid": "373016c5-e2f5-463d-8ef3-4c6ad1aeb2eb"
00:20:57.072 }
00:20:57.072 ]
00:20:57.072 }
00:20:57.072 ]
00:20:57.072 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:57.072 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:20:57.072 [2024-11-20 12:35:02.637740] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... [2024-11-20 12:35:02.637794] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2808797 ]
00:20:57.072 [2024-11-20 12:35:02.726995] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout)
00:20:57.072 [2024-11-20 12:35:02.727106] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:20:57.072 [2024-11-20 12:35:02.727133] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:20:57.072 [2024-11-20 12:35:02.727142] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:20:57.072 [2024-11-20 12:35:02.727187] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout)
00:20:57.072 [2024-11-20 12:35:02.746116] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
00:20:57.072 [2024-11-20 12:35:02.764640] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:57.072 [2024-11-20 12:35:02.764664] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:20:57.072 [2024-11-20 12:35:02.764678] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764689] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764699] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764708] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764718] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764728] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764737] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764747] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764756] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764766] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764776] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764785] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764795] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764804] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764814] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764823] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764833] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764843] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764852] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764862] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764872] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764881] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764891] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 
12:35:02.764900] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764910] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764919] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764929] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764938] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764948] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764958] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764967] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.764979] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:20:57.072 [2024-11-20 12:35:02.764989] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:57.072 [2024-11-20 12:35:02.764996] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:20:57.072 [2024-11-20 12:35:02.765021] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.765042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x180b00 00:20:57.072 [2024-11-20 12:35:02.769509] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.072 [2024-11-20 12:35:02.769530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:57.072 [2024-11-20 12:35:02.769543] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.769562] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:57.072 [2024-11-20 12:35:02.769575] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:20:57.072 [2024-11-20 12:35:02.769587] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:20:57.072 [2024-11-20 12:35:02.769609] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.769624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.072 [2024-11-20 12:35:02.769656] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.072 [2024-11-20 12:35:02.769666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:20:57.072 [2024-11-20 12:35:02.769678] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:20:57.072 [2024-11-20 12:35:02.769687] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180b00 00:20:57.072 [2024-11-20 12:35:02.769698] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:20:57.073 [2024-11-20 12:35:02.769711] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.769724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.073 [2024-11-20 12:35:02.769745] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.073 [2024-11-20 12:35:02.769755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:20:57.073 [2024-11-20 12:35:02.769766] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:20:57.073 [2024-11-20 12:35:02.769775] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.769787] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:20:57.073 [2024-11-20 12:35:02.769799] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.769812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.073 [2024-11-20 12:35:02.769839] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.073 [2024-11-20 12:35:02.769849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:57.073 [2024-11-20 12:35:02.769864] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:57.073 [2024-11-20 12:35:02.769874] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.769889] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.769902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.073 [2024-11-20 12:35:02.769926] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.073 [2024-11-20 12:35:02.769936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:57.073 [2024-11-20 12:35:02.769947] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:20:57.073 [2024-11-20 12:35:02.769956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:20:57.073 [2024-11-20 12:35:02.769965] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180b00 00:20:57.073 [2024-11-20 
12:35:02.769976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:57.073 [2024-11-20 12:35:02.770088] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:20:57.073 [2024-11-20 12:35:02.770097] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:57.073 [2024-11-20 12:35:02.770112] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.770125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.073 [2024-11-20 12:35:02.770152] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.073 [2024-11-20 12:35:02.770162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:57.073 [2024-11-20 12:35:02.770173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:57.073 [2024-11-20 12:35:02.770182] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.770196] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.770209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.073 [2024-11-20 12:35:02.770232] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.073 [2024-11-20 12:35:02.770242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:57.073 [2024-11-20 12:35:02.770253] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:57.073 [2024-11-20 12:35:02.770262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:20:57.073 [2024-11-20 12:35:02.770271] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.770283] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:20:57.073 [2024-11-20 12:35:02.770297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:20:57.073 [2024-11-20 12:35:02.770319] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.770334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180b00 00:20:57.073 [2024-11-20 12:35:02.770390] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
00:20:57.073 [2024-11-20 12:35:02.770400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:57.073 [2024-11-20 12:35:02.770415] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:20:57.073 [2024-11-20 12:35:02.770425] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:20:57.073 [2024-11-20 12:35:02.770433] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:20:57.073 [2024-11-20 12:35:02.770443] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:20:57.073 [2024-11-20 12:35:02.770452] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:20:57.073 [2024-11-20 12:35:02.770460] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:20:57.073 [2024-11-20 12:35:02.770470] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.770496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:20:57.073 [2024-11-20 12:35:02.770513] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.770526] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.073 [2024-11-20 12:35:02.770557] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.073 [2024-11-20 12:35:02.770567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:57.073 [2024-11-20 12:35:02.770582] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.770593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.073 [2024-11-20 12:35:02.770604] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.770615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.073 [2024-11-20 12:35:02.770626] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.770637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.073 [2024-11-20 12:35:02.770648] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.770659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.073 [2024-11-20 12:35:02.770668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:57.073 [2024-11-20 12:35:02.770677] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.770699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:57.073 [2024-11-20 12:35:02.770714] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.770726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.073 [2024-11-20 12:35:02.770750] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.073 [2024-11-20 12:35:02.770761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:20:57.073 [2024-11-20 12:35:02.770772] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:20:57.073 [2024-11-20 12:35:02.770781] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:20:57.073 [2024-11-20 12:35:02.770790] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.770807] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.770821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180b00 00:20:57.073 [2024-11-20 12:35:02.770856] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.073 [2024-11-20 12:35:02.770866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:57.073 [2024-11-20 12:35:02.770878] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.770894] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:20:57.073 [2024-11-20 12:35:02.770937] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.073 [2024-11-20 12:35:02.770954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x180b00 00:20:57.073 [2024-11-20 12:35:02.770967] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180b00 00:20:57.074 [2024-11-20 12:35:02.770978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.074 [2024-11-20 12:35:02.771004] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.074 [2024-11-20 12:35:02.771015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
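A hedged cross-check sketch (not captured output): the admin-queue trace above finishes controller initialization and issues the discovery GET LOG PAGE commands, and the spdk_nvme_identify report that follows is the discovery controller's identify data. Assuming a host with nvme-cli and the nvme-rdma module loaded (the trace modprobes it earlier), the same listeners should answer:

  # Query the discovery service the target registered at 192.168.100.8:4420.
  nvme discover -t rdma -a 192.168.100.8 -s 4420
  # Connect to the data subsystem; '-i 15' mirrors the NVME_CONNECT='nvme connect -i 15' override in the trace.
  nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1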
00:20:57.074 [2024-11-20 12:35:02.771036] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x180b00 00:20:57.074 [2024-11-20 12:35:02.771050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x180b00 00:20:57.074 [2024-11-20 12:35:02.771060] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180b00 00:20:57.074 [2024-11-20 12:35:02.771070] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.074 [2024-11-20 12:35:02.771079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:57.074 [2024-11-20 12:35:02.771089] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180b00 00:20:57.074 [2024-11-20 12:35:02.771099] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.074 [2024-11-20 12:35:02.771108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:57.074 [2024-11-20 12:35:02.771130] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180b00 00:20:57.074 [2024-11-20 12:35:02.771144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x180b00 00:20:57.074 [2024-11-20 12:35:02.771154] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180b00 00:20:57.074 [2024-11-20 12:35:02.771183] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.074 [2024-11-20 12:35:02.771194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:57.074 [2024-11-20 12:35:02.771213] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180b00 00:20:57.074 ===================================================== 00:20:57.074 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:57.074 ===================================================== 00:20:57.074 Controller Capabilities/Features 00:20:57.074 ================================ 00:20:57.074 Vendor ID: 0000 00:20:57.074 Subsystem Vendor ID: 0000 00:20:57.074 Serial Number: .................... 00:20:57.074 Model Number: ........................................ 
00:20:57.074 Firmware Version: 25.01
00:20:57.074 Recommended Arb Burst: 0
00:20:57.074 IEEE OUI Identifier: 00 00 00
00:20:57.074 Multi-path I/O
00:20:57.074 May have multiple subsystem ports: No
00:20:57.074 May have multiple controllers: No
00:20:57.074 Associated with SR-IOV VF: No
00:20:57.074 Max Data Transfer Size: 131072
00:20:57.074 Max Number of Namespaces: 0
00:20:57.074 Max Number of I/O Queues: 1024
00:20:57.074 NVMe Specification Version (VS): 1.3
00:20:57.074 NVMe Specification Version (Identify): 1.3
00:20:57.074 Maximum Queue Entries: 128
00:20:57.074 Contiguous Queues Required: Yes
00:20:57.074 Arbitration Mechanisms Supported
00:20:57.074 Weighted Round Robin: Not Supported
00:20:57.074 Vendor Specific: Not Supported
00:20:57.074 Reset Timeout: 15000 ms
00:20:57.074 Doorbell Stride: 4 bytes
00:20:57.074 NVM Subsystem Reset: Not Supported
00:20:57.074 Command Sets Supported
00:20:57.074 NVM Command Set: Supported
00:20:57.074 Boot Partition: Not Supported
00:20:57.074 Memory Page Size Minimum: 4096 bytes
00:20:57.074 Memory Page Size Maximum: 4096 bytes
00:20:57.074 Persistent Memory Region: Not Supported
00:20:57.074 Optional Asynchronous Events Supported
00:20:57.074 Namespace Attribute Notices: Not Supported
00:20:57.074 Firmware Activation Notices: Not Supported
00:20:57.074 ANA Change Notices: Not Supported
00:20:57.074 PLE Aggregate Log Change Notices: Not Supported
00:20:57.074 LBA Status Info Alert Notices: Not Supported
00:20:57.074 EGE Aggregate Log Change Notices: Not Supported
00:20:57.074 Normal NVM Subsystem Shutdown event: Not Supported
00:20:57.074 Zone Descriptor Change Notices: Not Supported
00:20:57.074 Discovery Log Change Notices: Supported
00:20:57.074 Controller Attributes
00:20:57.074 128-bit Host Identifier: Not Supported
00:20:57.074 Non-Operational Permissive Mode: Not Supported
00:20:57.074 NVM Sets: Not Supported
00:20:57.074 Read Recovery Levels: Not Supported
00:20:57.074 Endurance Groups: Not Supported
00:20:57.074 Predictable Latency Mode: Not Supported
00:20:57.074 Traffic Based Keep Alive: Not Supported
00:20:57.074 Namespace Granularity: Not Supported
00:20:57.074 SQ Associations: Not Supported
00:20:57.074 UUID List: Not Supported
00:20:57.074 Multi-Domain Subsystem: Not Supported
00:20:57.074 Fixed Capacity Management: Not Supported
00:20:57.074 Variable Capacity Management: Not Supported
00:20:57.074 Delete Endurance Group: Not Supported
00:20:57.074 Delete NVM Set: Not Supported
00:20:57.074 Extended LBA Formats Supported: Not Supported
00:20:57.074 Flexible Data Placement Supported: Not Supported
00:20:57.074
00:20:57.074 Controller Memory Buffer Support
00:20:57.074 ================================
00:20:57.074 Supported: No
00:20:57.074
00:20:57.074 Persistent Memory Region Support
00:20:57.074 ================================
00:20:57.074 Supported: No
00:20:57.074
00:20:57.074 Admin Command Set Attributes
00:20:57.074 ============================
00:20:57.074 Security Send/Receive: Not Supported
00:20:57.074 Format NVM: Not Supported
00:20:57.074 Firmware Activate/Download: Not Supported
00:20:57.074 Namespace Management: Not Supported
00:20:57.074 Device Self-Test: Not Supported
00:20:57.074 Directives: Not Supported
00:20:57.074 NVMe-MI: Not Supported
00:20:57.074 Virtualization Management: Not Supported
00:20:57.074 Doorbell Buffer Config: Not Supported
00:20:57.074 Get LBA Status Capability: Not Supported
00:20:57.074 Command & Feature Lockdown Capability: Not Supported
00:20:57.074 Abort Command Limit: 1
00:20:57.074 Async Event Request Limit: 4
00:20:57.074 Number of Firmware Slots: N/A
00:20:57.074 Firmware Slot 1 Read-Only: N/A
00:20:57.074 Firmware Activation Without Reset: N/A
00:20:57.074 Multiple Update Detection Support: N/A
00:20:57.074 Firmware Update Granularity: No Information Provided
00:20:57.074 Per-Namespace SMART Log: No
00:20:57.074 Asymmetric Namespace Access Log Page: Not Supported
00:20:57.074 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:20:57.074 Command Effects Log Page: Not Supported
00:20:57.074 Get Log Page Extended Data: Supported
00:20:57.074 Telemetry Log Pages: Not Supported
00:20:57.074 Persistent Event Log Pages: Not Supported
00:20:57.074 Supported Log Pages Log Page: May Support
00:20:57.074 Commands Supported & Effects Log Page: Not Supported
00:20:57.074 Feature Identifiers & Effects Log Page: May Support
00:20:57.074 NVMe-MI Commands & Effects Log Page: May Support
00:20:57.074 Data Area 4 for Telemetry Log: Not Supported
00:20:57.074 Error Log Page Entries Supported: 128
00:20:57.074 Keep Alive: Not Supported
00:20:57.074
00:20:57.074 NVM Command Set Attributes
00:20:57.074 ==========================
00:20:57.074 Submission Queue Entry Size
00:20:57.074 Max: 1
00:20:57.074 Min: 1
00:20:57.074 Completion Queue Entry Size
00:20:57.074 Max: 1
00:20:57.074 Min: 1
00:20:57.074 Number of Namespaces: 0
00:20:57.074 Compare Command: Not Supported
00:20:57.074 Write Uncorrectable Command: Not Supported
00:20:57.074 Dataset Management Command: Not Supported
00:20:57.074 Write Zeroes Command: Not Supported
00:20:57.074 Set Features Save Field: Not Supported
00:20:57.074 Reservations: Not Supported
00:20:57.074 Timestamp: Not Supported
00:20:57.074 Copy: Not Supported
00:20:57.074 Volatile Write Cache: Not Present
00:20:57.074 Atomic Write Unit (Normal): 1
00:20:57.074 Atomic Write Unit (PFail): 1
00:20:57.074 Atomic Compare & Write Unit: 1
00:20:57.074 Fused Compare & Write: Supported
00:20:57.074 Scatter-Gather List
00:20:57.074 SGL Command Set: Supported
00:20:57.074 SGL Keyed: Supported
00:20:57.074 SGL Bit Bucket Descriptor: Not Supported
00:20:57.074 SGL Metadata Pointer: Not Supported
00:20:57.074 Oversized SGL: Not Supported
00:20:57.074 SGL Metadata Address: Not Supported
00:20:57.074 SGL Offset: Supported
00:20:57.074 Transport SGL Data Block: Not Supported
00:20:57.074 Replay Protected Memory Block: Not Supported
00:20:57.074
00:20:57.074 Firmware Slot Information
00:20:57.074 =========================
00:20:57.075 Active slot: 0
00:20:57.075
00:20:57.075
00:20:57.075 Error Log
00:20:57.075 =========
00:20:57.075
00:20:57.075 Active Namespaces
00:20:57.075 =================
00:20:57.075 Discovery Log Page
00:20:57.075 ==================
00:20:57.075 Generation Counter: 2
00:20:57.075 Number of Records: 2
00:20:57.075 Record Format: 0
00:20:57.075
00:20:57.075 Discovery Log Entry 0
00:20:57.075 ----------------------
00:20:57.075 Transport Type: 1 (RDMA)
00:20:57.075 Address Family: 1 (IPv4)
00:20:57.075 Subsystem Type: 3 (Current Discovery Subsystem)
00:20:57.075 Entry Flags:
00:20:57.075 Duplicate Returned Information: 1
00:20:57.075 Explicit Persistent Connection Support for Discovery: 1
00:20:57.075 Transport Requirements:
00:20:57.075 Secure Channel: Not Required
00:20:57.075 Port ID: 0 (0x0000)
00:20:57.075 Controller ID: 65535 (0xffff)
00:20:57.075 Admin Max SQ Size: 128
00:20:57.075 Transport Service Identifier: 4420
00:20:57.075 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:20:57.075 Transport Address: 192.168.100.8
00:20:57.075 Transport Specific Address Subtype - RDMA
00:20:57.075 RDMA QP Service Type: 1 (Reliable Connected)
00:20:57.075 RDMA Provider Type: 1 (No provider specified)
00:20:57.075 RDMA CM Service: 1 (RDMA_CM)
00:20:57.075 Discovery Log Entry 1
00:20:57.075 ----------------------
00:20:57.075 Transport Type: 1 (RDMA)
00:20:57.075 Address Family: 1 (IPv4)
00:20:57.075 Subsystem Type: 2 (NVM Subsystem)
00:20:57.075 Entry Flags:
00:20:57.075 Duplicate Returned Information: 0
00:20:57.075 Explicit Persistent Connection Support for Discovery: 0
00:20:57.075 Transport Requirements:
00:20:57.075 Secure Channel: Not Required
00:20:57.075 Port ID: 0 (0x0000)
00:20:57.075 Controller ID: 65535 (0xffff)
00:20:57.075 Admin Max SQ Size: [2024-11-20 12:35:02.771330] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:20:57.075 [2024-11-20 12:35:02.771351] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 28566 doesn't match qid
00:20:57.075 [2024-11-20 12:35:02.771371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32563 cdw0:1538ad0 sqhd:7320 p:0 m:0 dnr:0
00:20:57.075 [2024-11-20 12:35:02.771382] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 28566 doesn't match qid
00:20:57.075 [2024-11-20 12:35:02.771395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32563 cdw0:1538ad0 sqhd:7320 p:0 m:0 dnr:0
00:20:57.075 [2024-11-20 12:35:02.771405] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 28566 doesn't match qid
00:20:57.075 [2024-11-20 12:35:02.771417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32563 cdw0:1538ad0 sqhd:7320 p:0 m:0 dnr:0
00:20:57.075 [2024-11-20 12:35:02.771427] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 28566 doesn't match qid
00:20:57.075 [2024-11-20 12:35:02.771439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32563 cdw0:1538ad0 sqhd:7320 p:0 m:0 dnr:0
00:20:57.075 [2024-11-20 12:35:02.771454] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180b00
00:20:57.075 [2024-11-20 12:35:02.771467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:57.075 [2024-11-20 12:35:02.771503] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:57.075 [2024-11-20 12:35:02.771515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0
00:20:57.075 [2024-11-20 12:35:02.771529] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00
00:20:57.075 [2024-11-20 12:35:02.771542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:57.075 [2024-11-20 12:35:02.771552] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180b00
00:20:57.075 [2024-11-20 12:35:02.771583] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:57.075 [2024-11-20 12:35:02.771593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:20:57.075 [2024-11-20 12:35:02.771603]
nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:20:57.075 [2024-11-20 12:35:02.771611] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:20:57.075 [2024-11-20 12:35:02.771621] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180b00 00:20:57.075 [2024-11-20 12:35:02.771635] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.075 [2024-11-20 12:35:02.771652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.075 [2024-11-20 12:35:02.771679] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.075 [2024-11-20 12:35:02.771689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:57.075 [2024-11-20 12:35:02.771699] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180b00 00:20:57.075 [2024-11-20 12:35:02.771714] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.075 [2024-11-20 12:35:02.771727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.075 [2024-11-20 12:35:02.771754] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.075 [2024-11-20 12:35:02.771763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:57.075 [2024-11-20 12:35:02.771773] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180b00 00:20:57.075 [2024-11-20 12:35:02.771788] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.075 [2024-11-20 12:35:02.771800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.075 [2024-11-20 12:35:02.771823] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.075 [2024-11-20 12:35:02.771833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:57.075 [2024-11-20 12:35:02.771842] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180b00 00:20:57.075 [2024-11-20 12:35:02.771857] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.075 [2024-11-20 12:35:02.771870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.075 [2024-11-20 12:35:02.771899] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.075 [2024-11-20 12:35:02.771909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:57.075 [2024-11-20 12:35:02.771918] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180b00 00:20:57.075 [2024-11-20 12:35:02.771933] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.075 [2024-11-20 12:35:02.771946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.075 [2024-11-20 12:35:02.771972] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.075 [2024-11-20 12:35:02.771982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:57.075 [2024-11-20 12:35:02.771992] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180b00 00:20:57.075 [2024-11-20 12:35:02.772006] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.075 [2024-11-20 12:35:02.772019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.075 [2024-11-20 12:35:02.772046] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.075 [2024-11-20 12:35:02.772055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:20:57.075 [2024-11-20 12:35:02.772065] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180b00 00:20:57.075 [2024-11-20 12:35:02.772079] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.075 [2024-11-20 12:35:02.772096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.075 [2024-11-20 12:35:02.772117] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.075 [2024-11-20 12:35:02.772127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:20:57.075 [2024-11-20 12:35:02.772136] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180b00 00:20:57.075 [2024-11-20 12:35:02.772151] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.075 [2024-11-20 12:35:02.772164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.075 [2024-11-20 12:35:02.772187] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.075 [2024-11-20 12:35:02.772197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:20:57.075 [2024-11-20 12:35:02.772206] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180b00 00:20:57.075 [2024-11-20 12:35:02.772221] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.075 [2024-11-20 12:35:02.772233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.075 [2024-11-20 12:35:02.772257] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.076 [2024-11-20 12:35:02.772266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:20:57.076 [2024-11-20 12:35:02.772276] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.772290] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.772303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.076 [2024-11-20 12:35:02.772326] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.076 [2024-11-20 12:35:02.772336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:57.076 [2024-11-20 12:35:02.772345] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.772360] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.772372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.076 [2024-11-20 12:35:02.772402] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.076 [2024-11-20 12:35:02.772411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:57.076 [2024-11-20 12:35:02.772421] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.772436] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.772448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.076 [2024-11-20 12:35:02.772475] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.076 [2024-11-20 12:35:02.772492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:57.076 [2024-11-20 12:35:02.772502] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.772522] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.772535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.076 [2024-11-20 12:35:02.772566] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.076 [2024-11-20 12:35:02.772576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:57.076 [2024-11-20 12:35:02.772585] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.772600] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.772613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:20:57.076 [2024-11-20 12:35:02.772639] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.076 [2024-11-20 12:35:02.772649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:57.076 [2024-11-20 12:35:02.772658] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.772673] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.772686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.076 [2024-11-20 12:35:02.772712] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.076 [2024-11-20 12:35:02.772722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:57.076 [2024-11-20 12:35:02.772731] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.772746] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.772758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.076 [2024-11-20 12:35:02.772782] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.076 [2024-11-20 12:35:02.772791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:57.076 [2024-11-20 12:35:02.772801] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.772816] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.772829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.076 [2024-11-20 12:35:02.772855] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.076 [2024-11-20 12:35:02.772865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:57.076 [2024-11-20 12:35:02.772874] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.772889] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.772901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.076 [2024-11-20 12:35:02.772925] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.076 [2024-11-20 12:35:02.772934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:20:57.076 [2024-11-20 12:35:02.772944] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.772963] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.772976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.076 [2024-11-20 12:35:02.773000] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.076 [2024-11-20 12:35:02.773010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:20:57.076 [2024-11-20 12:35:02.773020] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.773034] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.773047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.076 [2024-11-20 12:35:02.773070] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.076 [2024-11-20 12:35:02.773080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:20:57.076 [2024-11-20 12:35:02.773090] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.773104] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.773117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.076 [2024-11-20 12:35:02.773150] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.076 [2024-11-20 12:35:02.773159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:20:57.076 [2024-11-20 12:35:02.773169] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.773184] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.773196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.076 [2024-11-20 12:35:02.773223] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.076 [2024-11-20 12:35:02.773232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:57.076 [2024-11-20 12:35:02.773242] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.773256] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.076 [2024-11-20 12:35:02.773269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.076 [2024-11-20 12:35:02.773296] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.076 [2024-11-20 12:35:02.773305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:20:57.076 [2024-11-20 12:35:02.773315] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180b00 00:20:57.077 [2024-11-20 12:35:02.773330] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.077 [2024-11-20 12:35:02.773342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.077 [2024-11-20 12:35:02.773363] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.077 [2024-11-20 12:35:02.773372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:20:57.077 [2024-11-20 12:35:02.773386] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180b00 00:20:57.077 [2024-11-20 12:35:02.773401] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.077 [2024-11-20 12:35:02.773414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.077 [2024-11-20 12:35:02.773438] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.077 [2024-11-20 12:35:02.773447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:20:57.077 [2024-11-20 12:35:02.773457] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180b00 00:20:57.077 [2024-11-20 12:35:02.773472] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.077 [2024-11-20 12:35:02.777502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.077 [2024-11-20 12:35:02.777529] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.077 [2024-11-20 12:35:02.777540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000b p:0 m:0 dnr:0 00:20:57.077 [2024-11-20 12:35:02.777550] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180b00 00:20:57.077 [2024-11-20 12:35:02.777561] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:20:57.338 128 00:20:57.338 Transport Service Identifier: 4420 00:20:57.338 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:57.338 Transport Address: 192.168.100.8 00:20:57.338 Transport Specific Address Subtype - RDMA 00:20:57.338 RDMA QP Service Type: 1 (Reliable Connected) 00:20:57.338 RDMA Provider Type: 1 (No provider specified) 00:20:57.338 RDMA CM Service: 1 (RDMA_CM) 00:20:57.338 12:35:02 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:57.338 [2024-11-20 12:35:02.873292] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
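Here identify.sh launches the second spdk_nvme_identify run, this time aimed directly at nqn.2016-06.io.spdk:cnode1 rather than the discovery subsystem. Stripped down to the two public calls involved, the connect path it exercises looks roughly like this (a sketch, assuming spdk_env_init() has already run; error handling trimmed):

#include <stddef.h>
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *
connect_cnode1(void)
{
	struct spdk_nvme_transport_id trid = {0};

	/* the same transport ID string the test passes via -r */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
			"trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return NULL;
	}

	/* NULL opts selects the defaults; this call drives the whole
	 * connect-adminq -> ready state machine traced below */
	return spdk_nvme_connect(&trid, NULL, 0);
}

spdk_nvme_connect() does not return until controller initialization completes, which is why the entire CC/CSTS and identify sequence that follows runs inside this single call.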
00:20:57.338 [2024-11-20 12:35:02.873354] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2808801 ] 00:20:57.338 [2024-11-20 12:35:02.954072] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:20:57.338 [2024-11-20 12:35:02.954176] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:20:57.338 [2024-11-20 12:35:02.954206] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:20:57.338 [2024-11-20 12:35:02.954215] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:20:57.338 [2024-11-20 12:35:02.954254] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:20:57.338 [2024-11-20 12:35:02.967518] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:20:57.338 [2024-11-20 12:35:02.986092] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:57.338 [2024-11-20 12:35:02.986110] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:20:57.338 [2024-11-20 12:35:02.986135] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180b00 00:20:57.338 [2024-11-20 12:35:02.986148] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986158] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986167] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986176] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986186] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986195] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986205] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986214] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986224] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986234] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986243] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986253] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986262] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986272] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 
lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986281] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986297] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986307] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986316] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986325] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986335] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986344] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986354] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986364] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986373] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986383] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986392] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986402] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986411] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986421] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986430] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986439] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:20:57.339 [2024-11-20 12:35:02.986450] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:57.339 [2024-11-20 12:35:02.986458] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:20:57.339 [2024-11-20 12:35:02.986486] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.986507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x180b00 00:20:57.339 [2024-11-20 12:35:02.991510] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.339 [2024-11-20 12:35:02.991530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:57.339 [2024-11-20 12:35:02.991542] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.991554] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:57.339 
[2024-11-20 12:35:02.991566] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:20:57.339 [2024-11-20 12:35:02.991577] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:20:57.339 [2024-11-20 12:35:02.991597] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.991612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.339 [2024-11-20 12:35:02.991641] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.339 [2024-11-20 12:35:02.991652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:20:57.339 [2024-11-20 12:35:02.991662] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:20:57.339 [2024-11-20 12:35:02.991671] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.991683] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:20:57.339 [2024-11-20 12:35:02.991696] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.991709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.339 [2024-11-20 12:35:02.991730] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.339 [2024-11-20 12:35:02.991740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:20:57.339 [2024-11-20 12:35:02.991751] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:20:57.339 [2024-11-20 12:35:02.991760] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.991772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:20:57.339 [2024-11-20 12:35:02.991784] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.991797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.339 [2024-11-20 12:35:02.991824] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.339 [2024-11-20 12:35:02.991835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:57.339 [2024-11-20 12:35:02.991845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:57.339 [2024-11-20 12:35:02.991860] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.991875] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.991889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.339 [2024-11-20 12:35:02.991913] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.339 [2024-11-20 12:35:02.991923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:57.339 [2024-11-20 12:35:02.991933] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:20:57.339 [2024-11-20 12:35:02.991942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:20:57.339 [2024-11-20 12:35:02.991951] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.991963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:57.339 [2024-11-20 12:35:02.992073] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:20:57.339 [2024-11-20 12:35:02.992082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:57.339 [2024-11-20 12:35:02.992096] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.992109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.339 [2024-11-20 12:35:02.992134] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.339 [2024-11-20 12:35:02.992144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:57.339 [2024-11-20 12:35:02.992154] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:57.339 [2024-11-20 12:35:02.992163] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.992178] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.992191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.339 [2024-11-20 12:35:02.992219] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.339 [2024-11-20 12:35:02.992229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:57.339 [2024-11-20 12:35:02.992239] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:57.339 [2024-11-20 12:35:02.992248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 
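The enable sequence traced above is the generic NVMe handshake: CC.EN and CSTS.RDY both read back 0, so the controller is already disabled and the driver can set CC.EN = 1 and poll until CSTS.RDY flips to 1 (over Fabrics these "register" accesses are the FABRIC PROPERTY GET/SET commands visible in the trace). A spec-level sketch of that logic follows; this is not SPDK's internal code, and rd()/wr() are placeholders for the transport's register access:

#include <stdbool.h>
#include <stdint.h>

#define NVME_REG_CC    0x14	/* Controller Configuration */
#define NVME_REG_CSTS  0x1c	/* Controller Status */
#define NVME_CC_EN     (1u << 0)
#define NVME_CSTS_RDY  (1u << 0)

typedef uint32_t (*reg_rd_fn)(uint32_t offset);
typedef void (*reg_wr_fn)(uint32_t offset, uint32_t value);

bool
nvme_enable_controller(reg_rd_fn rd, reg_wr_fn wr)
{
	/* Trace above: "CC.EN = 0 && CSTS.RDY = 0", i.e. the controller
	 * is already disabled and may be enabled directly. */
	if ((rd(NVME_REG_CC) & NVME_CC_EN) ||
	    (rd(NVME_REG_CSTS) & NVME_CSTS_RDY)) {
		return false;	/* would need a disable cycle first */
	}

	wr(NVME_REG_CC, rd(NVME_REG_CC) | NVME_CC_EN);

	/* "wait for CSTS.RDY = 1 (timeout 15000 ms)": poll until ready;
	 * a real loop bounds this with the CAP.TO-derived timeout. */
	while (!(rd(NVME_REG_CSTS) & NVME_CSTS_RDY)) {
	}
	return true;
}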
00:20:57.339 [2024-11-20 12:35:02.992257] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.992269] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:20:57.339 [2024-11-20 12:35:02.992284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:20:57.339 [2024-11-20 12:35:02.992301] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.339 [2024-11-20 12:35:02.992314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180b00 00:20:57.339 [2024-11-20 12:35:02.992367] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.339 [2024-11-20 12:35:02.992378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:57.339 [2024-11-20 12:35:02.992391] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:20:57.340 [2024-11-20 12:35:02.992401] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:20:57.340 [2024-11-20 12:35:02.992410] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:20:57.340 [2024-11-20 12:35:02.992418] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:20:57.340 [2024-11-20 12:35:02.992427] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:20:57.340 [2024-11-20 12:35:02.992436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:20:57.340 [2024-11-20 12:35:02.992445] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.992462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:20:57.340 [2024-11-20 12:35:02.992477] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.992498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.340 [2024-11-20 12:35:02.992532] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.340 [2024-11-20 12:35:02.992542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:57.340 [2024-11-20 12:35:02.992555] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.992567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.340 [2024-11-20 12:35:02.992578] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 
0x180b00 00:20:57.340 [2024-11-20 12:35:02.992589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.340 [2024-11-20 12:35:02.992600] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.992611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.340 [2024-11-20 12:35:02.992622] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.992633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.340 [2024-11-20 12:35:02.992642] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:57.340 [2024-11-20 12:35:02.992651] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.992670] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:57.340 [2024-11-20 12:35:02.992683] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.992696] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.340 [2024-11-20 12:35:02.992725] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.340 [2024-11-20 12:35:02.992735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:20:57.340 [2024-11-20 12:35:02.992752] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:20:57.340 [2024-11-20 12:35:02.992761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:57.340 [2024-11-20 12:35:02.992770] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.992783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:20:57.340 [2024-11-20 12:35:02.992798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:57.340 [2024-11-20 12:35:02.992811] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.992823] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.340 [2024-11-20 12:35:02.992858] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.340 [2024-11-20 12:35:02.992868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:20:57.340 [2024-11-20 
12:35:02.992947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:20:57.340 [2024-11-20 12:35:02.992959] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.992974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:57.340 [2024-11-20 12:35:02.992991] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.993004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x180b00 00:20:57.340 [2024-11-20 12:35:02.993042] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.340 [2024-11-20 12:35:02.993052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:57.340 [2024-11-20 12:35:02.993075] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:20:57.340 [2024-11-20 12:35:02.993098] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:20:57.340 [2024-11-20 12:35:02.993109] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.993124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:20:57.340 [2024-11-20 12:35:02.993138] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.993151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180b00 00:20:57.340 [2024-11-20 12:35:02.993205] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.340 [2024-11-20 12:35:02.993215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:57.340 [2024-11-20 12:35:02.993242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:57.340 [2024-11-20 12:35:02.993254] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.993269] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:57.340 [2024-11-20 12:35:02.993285] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.993298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180b00 00:20:57.340 [2024-11-20 12:35:02.993339] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.340 [2024-11-20 12:35:02.993349] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:57.340 [2024-11-20 12:35:02.993364] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:57.340 [2024-11-20 12:35:02.993374] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.993385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:20:57.340 [2024-11-20 12:35:02.993403] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:20:57.340 [2024-11-20 12:35:02.993416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:57.340 [2024-11-20 12:35:02.993425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:57.340 [2024-11-20 12:35:02.993434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:20:57.340 [2024-11-20 12:35:02.993444] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:20:57.340 [2024-11-20 12:35:02.993452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:20:57.340 [2024-11-20 12:35:02.993462] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:20:57.340 [2024-11-20 12:35:02.993491] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.993506] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.340 [2024-11-20 12:35:02.993518] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.993530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.340 [2024-11-20 12:35:02.993549] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.340 [2024-11-20 12:35:02.993560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:57.340 [2024-11-20 12:35:02.993570] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.993581] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.340 [2024-11-20 12:35:02.993590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:57.340 [2024-11-20 12:35:02.993599] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.993619] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180b00 
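Two numbers in the trace above tie together: the GET FEATURES KEEP ALIVE TIMER completion for nqn.2016-06.io.spdk:cnode1 came back with cdw0:2710, a negotiated keep-alive timeout of 0x2710 = 10000 ms, and the driver then logs "Sending keep alive every 5000000 us", half of that window. A trivial check of the arithmetic (illustrative only):

#include <stdio.h>

int main(void)
{
	unsigned kato_ms = 0x2710;	/* cdw0 from the trace: 10000 ms */
	unsigned long long interval_us =
		(unsigned long long)kato_ms * 1000 / 2;

	/* prints 5000000, matching "Sending keep alive every 5000000 us" */
	printf("%llu us\n", interval_us);
	return 0;
}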
00:20:57.340 [2024-11-20 12:35:02.993632] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.340 [2024-11-20 12:35:02.993655] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.340 [2024-11-20 12:35:02.993665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:57.340 [2024-11-20 12:35:02.993675] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180b00 00:20:57.340 [2024-11-20 12:35:02.993689] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180b00 00:20:57.341 [2024-11-20 12:35:02.993701] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.341 [2024-11-20 12:35:02.993723] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.341 [2024-11-20 12:35:02.993733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:57.341 [2024-11-20 12:35:02.993743] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180b00 00:20:57.341 [2024-11-20 12:35:02.993757] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180b00 00:20:57.341 [2024-11-20 12:35:02.993769] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.341 [2024-11-20 12:35:02.993795] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.341 [2024-11-20 12:35:02.993805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:20:57.341 [2024-11-20 12:35:02.993815] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180b00 00:20:57.341 [2024-11-20 12:35:02.993840] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180b00 00:20:57.341 [2024-11-20 12:35:02.993855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x180b00 00:20:57.341 [2024-11-20 12:35:02.993870] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180b00 00:20:57.341 [2024-11-20 12:35:02.993882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x180b00 00:20:57.341 [2024-11-20 12:35:02.993896] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x180b00 00:20:57.341 [2024-11-20 12:35:02.993908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x180b00 00:20:57.341 [2024-11-20 12:35:02.993922] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x180b00 00:20:57.341 
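The four GET LOG PAGE reads just issued (error log 01h, SMART/health 02h, firmware slot 03h and command effects 05h, per the low byte of each cdw10) are the last step before the identify utility dumps the controller data that follows. The same information can be pulled by hand from any RDMA-capable initiator with nvme-cli, reusing the address, port and NQN from this run (the /dev/nvme0 name the kernel assigns is an assumption):

    nvme discover -t rdma -a 192.168.100.8 -s 4420
    nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0     # Identify Controller (06h)
    nvme smart-log /dev/nvme0   # Get Log Page 02h
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1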
[2024-11-20 12:35:02.993934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x180b00
00:20:57.341 [2024-11-20 12:35:02.993949] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:57.341 [2024-11-20 12:35:02.993959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:20:57.341 [2024-11-20 12:35:02.993979] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180b00
00:20:57.341 [2024-11-20 12:35:02.993990] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:57.341 [2024-11-20 12:35:02.994003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:20:57.341 [2024-11-20 12:35:02.994020] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180b00
00:20:57.341 [2024-11-20 12:35:02.994032] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:57.341 [2024-11-20 12:35:02.994041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:20:57.341 [2024-11-20 12:35:02.994052] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180b00
00:20:57.341 [2024-11-20 12:35:02.994061] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:57.341 [2024-11-20 12:35:02.994070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:20:57.341 [2024-11-20 12:35:02.994085] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180b00
00:20:57.341 =====================================================
00:20:57.341 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:20:57.341 =====================================================
00:20:57.341 Controller Capabilities/Features
00:20:57.341 ================================
00:20:57.341 Vendor ID: 8086
00:20:57.341 Subsystem Vendor ID: 8086
00:20:57.341 Serial Number: SPDK00000000000001
00:20:57.341 Model Number: SPDK bdev Controller
00:20:57.341 Firmware Version: 25.01
00:20:57.341 Recommended Arb Burst: 6
00:20:57.341 IEEE OUI Identifier: e4 d2 5c
00:20:57.341 Multi-path I/O
00:20:57.341 May have multiple subsystem ports: Yes
00:20:57.341 May have multiple controllers: Yes
00:20:57.341 Associated with SR-IOV VF: No
00:20:57.341 Max Data Transfer Size: 131072
00:20:57.341 Max Number of Namespaces: 32
00:20:57.341 Max Number of I/O Queues: 127
00:20:57.341 NVMe Specification Version (VS): 1.3
00:20:57.341 NVMe Specification Version (Identify): 1.3
00:20:57.341 Maximum Queue Entries: 128
00:20:57.341 Contiguous Queues Required: Yes
00:20:57.341 Arbitration Mechanisms Supported
00:20:57.341 Weighted Round Robin: Not Supported
00:20:57.341 Vendor Specific: Not Supported
00:20:57.341 Reset Timeout: 15000 ms
00:20:57.341 Doorbell Stride: 4 bytes
00:20:57.341 NVM Subsystem Reset: Not Supported
00:20:57.341 Command Sets Supported
00:20:57.341 NVM Command Set: Supported
00:20:57.341 Boot Partition: Not Supported
00:20:57.341 Memory Page Size Minimum: 4096 bytes
00:20:57.341 Memory Page Size Maximum: 4096 bytes
00:20:57.341 Persistent Memory Region: Not Supported
00:20:57.341 Optional Asynchronous Events Supported
00:20:57.341 Namespace Attribute Notices: Supported
00:20:57.341 Firmware Activation Notices: Not Supported
00:20:57.341 ANA Change Notices: Not Supported
00:20:57.341 PLE Aggregate Log Change Notices: Not Supported
00:20:57.341 LBA Status Info Alert Notices: Not Supported
00:20:57.341 EGE Aggregate Log Change Notices: Not Supported
00:20:57.341 Normal NVM Subsystem Shutdown event: Not Supported
00:20:57.341 Zone Descriptor Change Notices: Not Supported
00:20:57.341 Discovery Log Change Notices: Not Supported
00:20:57.341 Controller Attributes
00:20:57.341 128-bit Host Identifier: Supported
00:20:57.341 Non-Operational Permissive Mode: Not Supported
00:20:57.341 NVM Sets: Not Supported
00:20:57.341 Read Recovery Levels: Not Supported
00:20:57.341 Endurance Groups: Not Supported
00:20:57.341 Predictable Latency Mode: Not Supported
00:20:57.341 Traffic Based Keep ALive: Not Supported
00:20:57.341 Namespace Granularity: Not Supported
00:20:57.341 SQ Associations: Not Supported
00:20:57.341 UUID List: Not Supported
00:20:57.341 Multi-Domain Subsystem: Not Supported
00:20:57.341 Fixed Capacity Management: Not Supported
00:20:57.341 Variable Capacity Management: Not Supported
00:20:57.341 Delete Endurance Group: Not Supported
00:20:57.341 Delete NVM Set: Not Supported
00:20:57.341 Extended LBA Formats Supported: Not Supported
00:20:57.341 Flexible Data Placement Supported: Not Supported
00:20:57.341
00:20:57.341 Controller Memory Buffer Support
00:20:57.341 ================================
00:20:57.341 Supported: No
00:20:57.341
00:20:57.341 Persistent Memory Region Support
00:20:57.341 ================================
00:20:57.341 Supported: No
00:20:57.341
00:20:57.341 Admin Command Set Attributes
00:20:57.341 ============================
00:20:57.341 Security Send/Receive: Not Supported
00:20:57.341 Format NVM: Not Supported
00:20:57.341 Firmware Activate/Download: Not Supported
00:20:57.341 Namespace Management: Not Supported
00:20:57.341 Device Self-Test: Not Supported
00:20:57.341 Directives: Not Supported
00:20:57.341 NVMe-MI: Not Supported
00:20:57.341 Virtualization Management: Not Supported
00:20:57.341 Doorbell Buffer Config: Not Supported
00:20:57.341 Get LBA Status Capability: Not Supported
00:20:57.341 Command & Feature Lockdown Capability: Not Supported
00:20:57.341 Abort Command Limit: 4
00:20:57.341 Async Event Request Limit: 4
00:20:57.341 Number of Firmware Slots: N/A
00:20:57.341 Firmware Slot 1 Read-Only: N/A
00:20:57.341 Firmware Activation Without Reset: N/A
00:20:57.341 Multiple Update Detection Support: N/A
00:20:57.341 Firmware Update Granularity: No Information Provided
00:20:57.341 Per-Namespace SMART Log: No
00:20:57.341 Asymmetric Namespace Access Log Page: Not Supported
00:20:57.341 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:20:57.341 Command Effects Log Page: Supported
00:20:57.341 Get Log Page Extended Data: Supported
00:20:57.341 Telemetry Log Pages: Not Supported
00:20:57.341 Persistent Event Log Pages: Not Supported
00:20:57.341 Supported Log Pages Log Page: May Support
00:20:57.341 Commands Supported & Effects Log Page: Not Supported
00:20:57.341 Feature Identifiers & Effects Log Page:May Support
00:20:57.341 NVMe-MI Commands & Effects Log Page: May Support
00:20:57.341 Data Area 4 for Telemetry Log: Not Supported
00:20:57.341 Error Log Page Entries Supported: 128
00:20:57.341 Keep Alive: Supported
00:20:57.341 Keep Alive Granularity: 10000 ms
00:20:57.341
00:20:57.341 NVM Command Set Attributes
00:20:57.341 ==========================
00:20:57.341 Submission Queue Entry Size
00:20:57.341 Max: 64
00:20:57.341 Min: 64
00:20:57.341 Completion Queue Entry Size
00:20:57.341 Max: 16
00:20:57.341 Min: 16
00:20:57.341 Number of Namespaces: 32
00:20:57.341 Compare Command: Supported
00:20:57.341 Write Uncorrectable Command: Not Supported
00:20:57.341 Dataset Management Command: Supported
00:20:57.341 Write Zeroes Command: Supported
00:20:57.341 Set Features Save Field: Not Supported
00:20:57.341 Reservations: Supported
00:20:57.341 Timestamp: Not Supported
00:20:57.341 Copy: Supported
00:20:57.341 Volatile Write Cache: Present
00:20:57.342 Atomic Write Unit (Normal): 1
00:20:57.342 Atomic Write Unit (PFail): 1
00:20:57.342 Atomic Compare & Write Unit: 1
00:20:57.342 Fused Compare & Write: Supported
00:20:57.342 Scatter-Gather List
00:20:57.342 SGL Command Set: Supported
00:20:57.342 SGL Keyed: Supported
00:20:57.342 SGL Bit Bucket Descriptor: Not Supported
00:20:57.342 SGL Metadata Pointer: Not Supported
00:20:57.342 Oversized SGL: Not Supported
00:20:57.342 SGL Metadata Address: Not Supported
00:20:57.342 SGL Offset: Supported
00:20:57.342 Transport SGL Data Block: Not Supported
00:20:57.342 Replay Protected Memory Block: Not Supported
00:20:57.342
00:20:57.342 Firmware Slot Information
00:20:57.342 =========================
00:20:57.342 Active slot: 1
00:20:57.342 Slot 1 Firmware Revision: 25.01
00:20:57.342
00:20:57.342
00:20:57.342 Commands Supported and Effects
00:20:57.342 ==============================
00:20:57.342 Admin Commands
00:20:57.342 --------------
00:20:57.342 Get Log Page (02h): Supported
00:20:57.342 Identify (06h): Supported
00:20:57.342 Abort (08h): Supported
00:20:57.342 Set Features (09h): Supported
00:20:57.342 Get Features (0Ah): Supported
00:20:57.342 Asynchronous Event Request (0Ch): Supported
00:20:57.342 Keep Alive (18h): Supported
00:20:57.342 I/O Commands
00:20:57.342 ------------
00:20:57.342 Flush (00h): Supported LBA-Change
00:20:57.342 Write (01h): Supported LBA-Change
00:20:57.342 Read (02h): Supported
00:20:57.342 Compare (05h): Supported
00:20:57.342 Write Zeroes (08h): Supported LBA-Change
00:20:57.342 Dataset Management (09h): Supported LBA-Change
00:20:57.342 Copy (19h): Supported LBA-Change
00:20:57.342
00:20:57.342 Error Log
00:20:57.342 =========
00:20:57.342
00:20:57.342 Arbitration
00:20:57.342 ===========
00:20:57.342 Arbitration Burst: 1
00:20:57.342
00:20:57.342 Power Management
00:20:57.342 ================
00:20:57.342 Number of Power States: 1
00:20:57.342 Current Power State: Power State #0
00:20:57.342 Power State #0:
00:20:57.342 Max Power: 0.00 W
00:20:57.342 Non-Operational State: Operational
00:20:57.342 Entry Latency: Not Reported
00:20:57.342 Exit Latency: Not Reported
00:20:57.342 Relative Read Throughput: 0
00:20:57.342 Relative Read Latency: 0
00:20:57.342 Relative Write Throughput: 0
00:20:57.342 Relative Write Latency: 0
00:20:57.342 Idle Power: Not Reported
00:20:57.342 Active Power: Not Reported
00:20:57.342 Non-Operational Permissive Mode: Not Supported
00:20:57.342
00:20:57.342 Health Information
00:20:57.342 ==================
00:20:57.342 Critical Warnings:
00:20:57.342 Available Spare Space: OK
00:20:57.342 Temperature: OK
00:20:57.342 Device Reliability: OK
00:20:57.342 Read Only: No
00:20:57.342 Volatile Memory Backup: OK
00:20:57.342 Current Temperature: 0 Kelvin (-273 Celsius)
00:20:57.342 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:20:57.342 Available Spare: 0%
00:20:57.342 Available Spare Threshold: 0%
00:20:57.342 Life Percentage [2024-11-20 12:35:02.994220] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x180b00 00:20:57.342 [2024-11-20 12:35:02.994237] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.342 [2024-11-20 12:35:02.994261] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.342 [2024-11-20 12:35:02.994272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:57.342 [2024-11-20 12:35:02.994282] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180b00 00:20:57.342 [2024-11-20 12:35:02.994329] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:20:57.342 [2024-11-20 12:35:02.994348] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 16080 doesn't match qid 00:20:57.342 [2024-11-20 12:35:02.994368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32523 cdw0:ede9f6f0 sqhd:b320 p:0 m:0 dnr:0 00:20:57.342 [2024-11-20 12:35:02.994379] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 16080 doesn't match qid 00:20:57.342 [2024-11-20 12:35:02.994393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32523 cdw0:ede9f6f0 sqhd:b320 p:0 m:0 dnr:0 00:20:57.342 [2024-11-20 12:35:02.994403] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 16080 doesn't match qid 00:20:57.342 [2024-11-20 12:35:02.994415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32523 cdw0:ede9f6f0 sqhd:b320 p:0 m:0 dnr:0 00:20:57.342 [2024-11-20 12:35:02.994425] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 16080 doesn't match qid 00:20:57.342 [2024-11-20 12:35:02.994437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32523 cdw0:ede9f6f0 sqhd:b320 p:0 m:0 dnr:0 00:20:57.342 [2024-11-20 12:35:02.994452] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180b00 00:20:57.342 [2024-11-20 12:35:02.994466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.342 [2024-11-20 12:35:02.994495] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.342 [2024-11-20 12:35:02.994508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:20:57.342 [2024-11-20 12:35:02.994522] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.342 [2024-11-20 12:35:02.994535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.342 [2024-11-20 12:35:02.994545] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180b00 00:20:57.342 [2024-11-20 12:35:02.994572] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.342 [2024-11-20 12:35:02.994583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 
dnr:0 00:20:57.342 [2024-11-20 12:35:02.994593] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:20:57.342 [2024-11-20 12:35:02.994601] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:20:57.342 [2024-11-20 12:35:02.994611] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180b00 00:20:57.342 [2024-11-20 12:35:02.994625] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.342 [2024-11-20 12:35:02.994639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.342 [2024-11-20 12:35:02.994660] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.342 [2024-11-20 12:35:02.994670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:57.342 [2024-11-20 12:35:02.994680] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180b00 00:20:57.342 [2024-11-20 12:35:02.994695] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.342 [2024-11-20 12:35:02.994708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.342 [2024-11-20 12:35:02.994735] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.342 [2024-11-20 12:35:02.994745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:57.342 [2024-11-20 12:35:02.994755] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180b00 00:20:57.342 [2024-11-20 12:35:02.994770] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.342 [2024-11-20 12:35:02.994783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.342 [2024-11-20 12:35:02.994811] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.342 [2024-11-20 12:35:02.994821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:57.342 [2024-11-20 12:35:02.994831] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180b00 00:20:57.342 [2024-11-20 12:35:02.994846] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.342 [2024-11-20 12:35:02.994859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.342 [2024-11-20 12:35:02.994883] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.342 [2024-11-20 12:35:02.994893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:57.342 [2024-11-20 12:35:02.994903] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180b00 00:20:57.342 [2024-11-20 12:35:02.994918] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.342 [2024-11-20 12:35:02.994931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.342 [2024-11-20 12:35:02.994955] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.342 [2024-11-20 12:35:02.994965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:57.342 [2024-11-20 12:35:02.994975] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180b00 00:20:57.342 [2024-11-20 12:35:02.994993] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.342 [2024-11-20 12:35:02.995007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.342 [2024-11-20 12:35:02.995031] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.342 [2024-11-20 12:35:02.995041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:57.342 [2024-11-20 12:35:02.995051] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180b00 00:20:57.342 [2024-11-20 12:35:02.995066] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.342 [2024-11-20 12:35:02.995080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.342 [2024-11-20 12:35:02.995106] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.343 [2024-11-20 12:35:02.995116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:57.343 [2024-11-20 12:35:02.995127] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180b00 00:20:57.343 [2024-11-20 12:35:02.995141] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.343 [2024-11-20 12:35:02.995155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.343 [2024-11-20 12:35:02.995185] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.343 [2024-11-20 12:35:02.995195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:57.343 [2024-11-20 12:35:02.995205] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180b00 00:20:57.343 [2024-11-20 12:35:02.995220] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00 00:20:57.343 [2024-11-20 12:35:02.995233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:57.343 [2024-11-20 12:35:02.995260] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:57.343 [2024-11-20 12:35:02.995269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0
00:20:57.343 [2024-11-20 12:35:02.995279] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180b00
00:20:57.343 [2024-11-20 12:35:02.995294] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00
00:20:57.343 [2024-11-20 12:35:02.995307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:57.343 [2024-11-20 12:35:02.995331] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:57.343 [2024-11-20 12:35:02.995341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0
00:20:57.343 [2024-11-20 12:35:02.995351] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180b00
00:20:57.343 [2024-11-20 12:35:02.995365] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00
00:20:57.343 [2024-11-20 12:35:02.995378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:57.343 [2024-11-20 12:35:02.995399] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:57.343 [2024-11-20 12:35:02.995409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0
00:20:57.343 [2024-11-20 12:35:02.995423] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180b00
00:20:57.343 [2024-11-20 12:35:02.995439] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00
00:20:57.343 [2024-11-20 12:35:02.995452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:57.343 [2024-11-20 12:35:02.999495] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:57.343 [2024-11-20 12:35:02.999524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0
00:20:57.343 [2024-11-20 12:35:02.999535] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180b00
00:20:57.343 [2024-11-20 12:35:02.999559] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180b00
00:20:57.343 [2024-11-20 12:35:02.999574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:57.343 [2024-11-20 12:35:02.999602] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:57.343 [2024-11-20 12:35:02.999612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0007 p:0 m:0 dnr:0
00:20:57.343 [2024-11-20 12:35:02.999621] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180b00
00:20:57.343 [2024-11-20 12:35:02.999633] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds
00:20:57.343 Used: 0%
00:20:57.343 Data Units Read: 0
00:20:57.343 Data Units Written: 0
00:20:57.343 Host Read Commands: 0
00:20:57.343 Host Write Commands: 0
00:20:57.343 Controller Busy Time: 0 minutes
00:20:57.343 Power Cycles: 0
00:20:57.343 Power On Hours: 0 hours
00:20:57.343 Unsafe Shutdowns: 0
00:20:57.343 Unrecoverable Media Errors: 0
00:20:57.343 Lifetime Error Log Entries: 0
00:20:57.343 Warning Temperature Time: 0 minutes
00:20:57.343 Critical Temperature Time: 0 minutes
00:20:57.343
00:20:57.343 Number of Queues
00:20:57.343 ================
00:20:57.343 Number of I/O Submission Queues: 127
00:20:57.343 Number of I/O Completion Queues: 127
00:20:57.343
00:20:57.343 Active Namespaces
00:20:57.343 =================
00:20:57.343 Namespace ID:1
00:20:57.343 Error Recovery Timeout: Unlimited
00:20:57.343 Command Set Identifier: NVM (00h)
00:20:57.343 Deallocate: Supported
00:20:57.343 Deallocated/Unwritten Error: Not Supported
00:20:57.343 Deallocated Read Value: Unknown
00:20:57.343 Deallocate in Write Zeroes: Not Supported
00:20:57.343 Deallocated Guard Field: 0xFFFF
00:20:57.343 Flush: Supported
00:20:57.343 Reservation: Supported
00:20:57.343 Namespace Sharing Capabilities: Multiple Controllers
00:20:57.343 Size (in LBAs): 131072 (0GiB)
00:20:57.343 Capacity (in LBAs): 131072 (0GiB)
00:20:57.343 Utilization (in LBAs): 131072 (0GiB)
00:20:57.343 NGUID: ABCDEF0123456789ABCDEF0123456789
00:20:57.343 EUI64: ABCDEF0123456789
00:20:57.343 UUID: 373016c5-e2f5-463d-8ef3-4c6ad1aeb2eb
00:20:57.343 Thin Provisioning: Not Supported
00:20:57.343 Per-NS Atomic Units: Yes
00:20:57.343 Atomic Boundary Size (Normal): 0
00:20:57.343 Atomic Boundary Size (PFail): 0
00:20:57.343 Atomic Boundary Offset: 0
00:20:57.343 Maximum Single Source Range Length: 65535
00:20:57.343 Maximum Copy Length: 65535
00:20:57.343 Maximum Source Range Count: 1
00:20:57.343 NGUID/EUI64 Never Reused: No
00:20:57.343 Namespace Write Protected: No
00:20:57.343 Number of LBA Formats: 1
00:20:57.343 Current LBA Format: LBA Format #00
00:20:57.343 LBA Format #00: Data Size: 512 Metadata Size: 0
00:20:57.343
00:20:57.343 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:20:57.343 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:57.343 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:57.343 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:57.343 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:57.343 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:20:57.343 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:20:57.343 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:57.343 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:20:57.343 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:20:57.343 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:20:57.343 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:20:57.343 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:57.343 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:20:57.602 rmmod nvme_rdma
00:20:57.602 rmmod nvme_fabrics
00:20:57.602 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
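The nvmfcleanup loop traced around this point retries the kernel module unload up to twenty times with failures tolerated (set +e ... set -e), and nvmftestfini then kills the target process through killprocess, whose guarded kill-and-reap steps appear in the trace just below. A condensed sketch of both patterns, approximating nvmf/common.sh and autotest_common.sh rather than copying them:

    set +e
    for i in {1..20}; do
        # unload the transport first, then the fabrics core it depends on
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1        # no pid recorded
        kill -0 "$pid" || return 0       # already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                      # reap and surface the exit status
    }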
00:20:57.602 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:20:57.602 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:20:57.602 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2808684 ']'
00:20:57.602 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2808684
00:20:57.602 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2808684 ']'
00:20:57.602 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2808684
00:20:57.602 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname
00:20:57.602 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:57.602 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2808684
00:20:57.602 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:57.602 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:57.602 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2808684'
00:20:57.602 killing process with pid 2808684
00:20:57.602 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2808684
00:20:57.602 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2808684
00:20:57.861 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:57.861 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:20:57.861
00:20:57.861 real 0m4.029s
00:20:57.861 user 0m5.840s
00:20:57.861 sys 0m2.218s
00:20:57.861 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:57.861 12:35:03 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:57.861 ************************************
00:20:57.861 END TEST nvmf_identify
00:20:57.861 ************************************
00:20:57.861 12:35:03 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma
00:20:57.861 12:35:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:20:57.861 12:35:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:57.861 12:35:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:20:57.861 ************************************
00:20:57.861 START TEST nvmf_perf
00:20:57.861 ************************************
00:20:57.861 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma
00:20:57.861 * Looking for test storage...
00:20:57.861 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:57.861 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:57.861 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:57.861 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:58.122 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:58.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.123 --rc genhtml_branch_coverage=1 00:20:58.123 --rc genhtml_function_coverage=1 00:20:58.123 --rc genhtml_legend=1 00:20:58.123 --rc geninfo_all_blocks=1 00:20:58.123 --rc geninfo_unexecuted_blocks=1 00:20:58.123 00:20:58.123 ' 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:58.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.123 --rc genhtml_branch_coverage=1 00:20:58.123 --rc genhtml_function_coverage=1 00:20:58.123 --rc genhtml_legend=1 00:20:58.123 --rc geninfo_all_blocks=1 00:20:58.123 --rc geninfo_unexecuted_blocks=1 00:20:58.123 00:20:58.123 ' 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:58.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.123 --rc genhtml_branch_coverage=1 00:20:58.123 --rc genhtml_function_coverage=1 00:20:58.123 --rc genhtml_legend=1 00:20:58.123 --rc geninfo_all_blocks=1 00:20:58.123 --rc geninfo_unexecuted_blocks=1 00:20:58.123 00:20:58.123 ' 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:58.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.123 --rc genhtml_branch_coverage=1 00:20:58.123 --rc genhtml_function_coverage=1 00:20:58.123 --rc genhtml_legend=1 00:20:58.123 --rc geninfo_all_blocks=1 00:20:58.123 --rc geninfo_unexecuted_blocks=1 00:20:58.123 00:20:58.123 ' 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.123 12:35:03 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:58.123 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:58.123 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:58.124 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.124 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:58.124 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:58.124 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:58.124 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.124 12:35:03 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.124 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.124 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:58.124 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:58.124 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:58.124 12:35:03 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.661 12:35:05 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:21:00.661 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:21:00.661 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:21:00.661 Found net devices under 0000:83:00.0: mlx_0_0 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
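The device scan above filters on PCI vendor/device IDs (0x15b3:0x1015 is a Mellanox ConnectX-4 Lx function) and then maps each matching function to its netdev through sysfs, which is where the "Found net devices under 0000:83:00.0: mlx_0_0" lines come from. A standalone sketch of the same lookup (an approximation of the nvmf/common.sh logic, not a copy):

    # Print the netdev behind every Mellanox (vendor 0x15b3) PCI function.
    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor")" = "0x15b3" ] || continue
        for net in "$pci"/net/*; do
            [ -e "$net" ] || continue
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done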
00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.661 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.662 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:21:00.662 Found net devices under 0000:83:00.1: mlx_0_1 00:21:00.662 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.662 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:00.662 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:00.662 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:00.662 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:00.662 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:00.662 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init 00:21:00.662 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:00.662 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:21:00.662 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:00.662 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:00.662 12:35:05 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:00.662 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:00.662 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:21:00.662 altname enp131s0f0np0 00:21:00.662 inet 192.168.100.8/24 scope global mlx_0_0 00:21:00.662 valid_lft forever preferred_lft forever 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:00.662 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:00.662 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:21:00.662 altname enp131s0f1np1 00:21:00.662 inet 192.168.100.9/24 scope global mlx_0_1 00:21:00.662 valid_lft forever preferred_lft forever 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:00.662 12:35:06 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:00.662 
192.168.100.9' 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:00.662 192.168.100.9' 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:00.662 192.168.100.9' 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2810187 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2810187 00:21:00.662 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2810187 ']' 00:21:00.663 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.663 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.663 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:00.663 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.663 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.663 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:00.663 [2024-11-20 12:35:06.259990] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:21:00.663 [2024-11-20 12:35:06.260160] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.663 [2024-11-20 12:35:06.365069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:00.921 [2024-11-20 12:35:06.427564] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:00.921 [2024-11-20 12:35:06.427616] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.921 [2024-11-20 12:35:06.427633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.921 [2024-11-20 12:35:06.427646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.921 [2024-11-20 12:35:06.427658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.921 [2024-11-20 12:35:06.428918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.921 [2024-11-20 12:35:06.429039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.921 [2024-11-20 12:35:06.429103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:00.921 [2024-11-20 12:35:06.429106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.921 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.921 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:21:00.921 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:00.921 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:00.921 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:00.921 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.921 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:00.921 12:35:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:04.213 12:35:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:04.213 12:35:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:04.472 12:35:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:21:04.472 12:35:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:04.808 12:35:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:04.808 12:35:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']' 00:21:04.808 12:35:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:04.808 12:35:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:21:04.808 12:35:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:21:05.083 [2024-11-20 12:35:10.735769] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:21:05.083 [2024-11-20 12:35:10.764496] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x908760/0x91b0d0) succeed. 00:21:05.083 [2024-11-20 12:35:10.779875] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x90ae00/0x99b140) succeed. 
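[Editor's note] The rpc.py calls in this stretch of the trace are the standard SPDK NVMe-oF target bring-up over RDMA: create the transport (1024 shared buffers; -c 0 requests zero in-capsule data, which the target raises to the 256-byte minimum per the WARNING above), then, as the lines that follow show, a subsystem, its namespaces, and a listener. Condensed from the commands actually issued in this run:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC bdev_malloc_create 64 512        # prints the new bdev name, e.g. Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420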
00:21:05.341 12:35:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:05.600 12:35:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:05.600 12:35:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:05.858 12:35:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:05.858 12:35:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:06.424 12:35:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:06.682 [2024-11-20 12:35:12.228169] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:06.682 12:35:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:21:06.941 12:35:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:21:06.941 12:35:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:21:06.941 12:35:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:06.941 12:35:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:21:08.316 Initializing NVMe Controllers 00:21:08.316 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:21:08.316 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:21:08.316 Initialization complete. Launching workers. 00:21:08.316 ======================================================== 00:21:08.316 Latency(us) 00:21:08.316 Device Information : IOPS MiB/s Average min max 00:21:08.316 PCIE (0000:82:00.0) NSID 1 from core 0: 64978.03 253.82 491.78 37.35 5410.68 00:21:08.316 ======================================================== 00:21:08.317 Total : 64978.03 253.82 491.78 37.35 5410.68 00:21:08.317 00:21:08.317 12:35:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:11.599 Initializing NVMe Controllers 00:21:11.599 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:11.599 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:11.599 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:11.599 Initialization complete. Launching workers. 
00:21:11.599 ======================================================== 00:21:11.599 Latency(us) 00:21:11.599 Device Information : IOPS MiB/s Average min max 00:21:11.599 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4841.00 18.91 206.08 80.58 4131.57 00:21:11.599 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3953.00 15.44 251.43 102.81 4151.72 00:21:11.599 ======================================================== 00:21:11.599 Total : 8793.99 34.35 226.46 80.58 4151.72 00:21:11.599 00:21:11.599 12:35:17 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:15.785 Initializing NVMe Controllers 00:21:15.785 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:15.785 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:15.785 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:15.785 Initialization complete. Launching workers. 00:21:15.785 ======================================================== 00:21:15.785 Latency(us) 00:21:15.785 Device Information : IOPS MiB/s Average min max 00:21:15.785 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11653.89 45.52 2755.82 702.59 6345.62 00:21:15.785 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4031.62 15.75 7965.19 5829.15 10087.23 00:21:15.785 ======================================================== 00:21:15.785 Total : 15685.51 61.27 4094.78 702.59 10087.23 00:21:15.785 00:21:15.785 12:35:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:21:15.785 12:35:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:19.974 Initializing NVMe Controllers 00:21:19.974 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:19.974 Controller IO queue size 128, less than required. 00:21:19.974 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:19.974 Controller IO queue size 128, less than required. 00:21:19.974 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:19.974 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:19.974 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:19.974 Initialization complete. Launching workers. 
00:21:19.974 ======================================================== 00:21:19.974 Latency(us) 00:21:19.974 Device Information : IOPS MiB/s Average min max 00:21:19.974 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2993.37 748.34 42742.85 23146.01 99171.29 00:21:19.974 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3088.33 772.08 40972.07 23157.43 69103.06 00:21:19.974 ======================================================== 00:21:19.974 Total : 6081.70 1520.43 41843.64 23146.01 99171.29 00:21:19.974 00:21:19.974 12:35:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:21:19.974 No valid NVMe controllers or AIO or URING devices found 00:21:19.974 Initializing NVMe Controllers 00:21:19.974 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:19.974 Controller IO queue size 128, less than required. 00:21:19.974 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:19.974 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:19.974 Controller IO queue size 128, less than required. 00:21:19.974 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:19.974 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:21:19.974 WARNING: Some requested NVMe devices were skipped 00:21:20.233 12:35:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:21:24.422 Initializing NVMe Controllers 00:21:24.422 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:24.422 Controller IO queue size 128, less than required. 00:21:24.422 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:24.422 Controller IO queue size 128, less than required. 00:21:24.422 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:24.422 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:24.422 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:24.422 Initialization complete. Launching workers. 
00:21:24.422 00:21:24.422 ==================== 00:21:24.422 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:24.422 RDMA transport: 00:21:24.422 dev name: mlx5_0 00:21:24.422 polls: 182223 00:21:24.422 idle_polls: 179918 00:21:24.422 completions: 31750 00:21:24.422 queued_requests: 1 00:21:24.422 total_send_wrs: 15875 00:21:24.422 send_doorbell_updates: 2018 00:21:24.422 total_recv_wrs: 16002 00:21:24.422 recv_doorbell_updates: 2021 00:21:24.422 --------------------------------- 00:21:24.422 00:21:24.422 ==================== 00:21:24.422 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:24.422 RDMA transport: 00:21:24.422 dev name: mlx5_0 00:21:24.422 polls: 182167 00:21:24.422 idle_polls: 181885 00:21:24.422 completions: 15066 00:21:24.422 queued_requests: 1 00:21:24.422 total_send_wrs: 7533 00:21:24.422 send_doorbell_updates: 254 00:21:24.422 total_recv_wrs: 7660 00:21:24.422 recv_doorbell_updates: 255 00:21:24.422 --------------------------------- 00:21:24.422 ======================================================== 00:21:24.422 Latency(us) 00:21:24.422 Device Information : IOPS MiB/s Average min max 00:21:24.422 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3968.50 992.12 32287.12 16099.68 79129.82 00:21:24.422 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1883.00 470.75 67701.25 39808.71 100561.93 00:21:24.422 ======================================================== 00:21:24.422 Total : 5851.49 1462.87 43683.31 16099.68 100561.93 00:21:24.422 00:21:24.680 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:24.680 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:24.939 rmmod nvme_rdma 00:21:24.939 rmmod nvme_fabrics 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2810187 ']' 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2810187 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2810187 ']' 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@958 -- # kill -0 2810187 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2810187 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2810187' 00:21:24.939 killing process with pid 2810187 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2810187 00:21:24.939 12:35:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2810187 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:26.840 00:21:26.840 real 0m28.847s 00:21:26.840 user 1m45.708s 00:21:26.840 sys 0m3.317s 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:26.840 ************************************ 00:21:26.840 END TEST nvmf_perf 00:21:26.840 ************************************ 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.840 ************************************ 00:21:26.840 START TEST nvmf_fio_host 00:21:26.840 ************************************ 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:21:26.840 * Looking for test storage... 
00:21:26.840 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:26.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.840 --rc genhtml_branch_coverage=1 00:21:26.840 --rc genhtml_function_coverage=1 00:21:26.840 --rc genhtml_legend=1 00:21:26.840 --rc geninfo_all_blocks=1 00:21:26.840 --rc geninfo_unexecuted_blocks=1 00:21:26.840 00:21:26.840 ' 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:26.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.840 --rc genhtml_branch_coverage=1 00:21:26.840 --rc genhtml_function_coverage=1 00:21:26.840 --rc genhtml_legend=1 00:21:26.840 --rc geninfo_all_blocks=1 00:21:26.840 --rc geninfo_unexecuted_blocks=1 00:21:26.840 00:21:26.840 ' 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:26.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.840 --rc genhtml_branch_coverage=1 00:21:26.840 --rc genhtml_function_coverage=1 00:21:26.840 --rc genhtml_legend=1 00:21:26.840 --rc geninfo_all_blocks=1 00:21:26.840 --rc geninfo_unexecuted_blocks=1 00:21:26.840 00:21:26.840 ' 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:26.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.840 --rc genhtml_branch_coverage=1 00:21:26.840 --rc genhtml_function_coverage=1 00:21:26.840 --rc genhtml_legend=1 00:21:26.840 --rc geninfo_all_blocks=1 00:21:26.840 --rc geninfo_unexecuted_blocks=1 00:21:26.840 00:21:26.840 ' 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.840 12:35:32 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.840 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:26.841 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:26.841 
12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:21:26.841 12:35:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:21:29.380 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:21:29.380 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:21:29.380 Found net devices under 0000:83:00.0: mlx_0_0 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:21:29.380 Found net devices under 0000:83:00.1: mlx_0_1 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:29.380 
12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:29.380 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:29.381 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:29.381 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:21:29.381 altname enp131s0f0np0 00:21:29.381 inet 192.168.100.8/24 scope global mlx_0_0 00:21:29.381 valid_lft forever preferred_lft forever 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 
00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:29.381 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:29.381 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:21:29.381 altname enp131s0f1np1 00:21:29.381 inet 192.168.100.9/24 scope global mlx_0_1 00:21:29.381 valid_lft forever preferred_lft forever 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:21:29.381 12:35:34 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:29.381 192.168.100.9' 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:29.381 192.168.100.9' 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:29.381 192.168.100.9' 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2813824 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:29.381 
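The two target IPs were just peeled off the newline-separated RDMA_IP_LIST with head/tail, exactly as traced:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9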
12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2813824 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2813824 ']' 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:29.381 12:35:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.381 [2024-11-20 12:35:34.966207] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:21:29.381 [2024-11-20 12:35:34.966301] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.381 [2024-11-20 12:35:35.038823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:29.381 [2024-11-20 12:35:35.103626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.381 [2024-11-20 12:35:35.103696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.381 [2024-11-20 12:35:35.103712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.381 [2024-11-20 12:35:35.103725] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.381 [2024-11-20 12:35:35.103736] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.381 [2024-11-20 12:35:35.107503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.381 [2024-11-20 12:35:35.107613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.381 [2024-11-20 12:35:35.107694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:29.381 [2024-11-20 12:35:35.107729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.640 12:35:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:29.640 12:35:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:21:29.640 12:35:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:29.897 [2024-11-20 12:35:35.578365] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c40df0/0x1c452e0) succeed. 00:21:29.897 [2024-11-20 12:35:35.594192] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c42480/0x1c86980) succeed. 
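With the target up and both mlx5 IB devices created, the test configures it over /var/tmp/spdk.sock. Collected from the rpc.py calls traced above and below, the bring-up sequence is:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420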
00:21:30.156 12:35:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:30.156 12:35:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:30.156 12:35:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.156 12:35:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:30.414 Malloc1 00:21:30.414 12:35:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:30.980 12:35:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:31.239 12:35:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:31.496 [2024-11-20 12:35:37.128778] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:31.496 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:21:31.754 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:21:31.754 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:21:31.754 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:21:31.754 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:31.754 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:31.754 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:31.754 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:31.754 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:31.754 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:31.754 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:31.754 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:31.754 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:31.754 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:31.754 12:35:37 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:31.754 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:31.754 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:31.754 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:31.754 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:31.754 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:32.012 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:32.012 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:32.012 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:32.012 12:35:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:21:32.013 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:32.013 fio-3.35 00:21:32.013 Starting 1 thread 00:21:34.544 00:21:34.544 test: (groupid=0, jobs=1): err= 0: pid=2814089: Wed Nov 20 12:35:40 2024 00:21:34.544 read: IOPS=11.4k, BW=44.4MiB/s (46.5MB/s)(89.0MiB/2005msec) 00:21:34.544 slat (nsec): min=2418, max=34667, avg=2574.02, stdev=489.55 00:21:34.544 clat (usec): min=2018, max=10142, avg=5628.12, stdev=182.78 00:21:34.544 lat (usec): min=2030, max=10145, avg=5630.69, stdev=182.69 00:21:34.544 clat percentiles (usec): 00:21:34.544 | 1.00th=[ 5080], 5.00th=[ 5538], 10.00th=[ 5604], 20.00th=[ 5604], 00:21:34.544 | 30.00th=[ 5604], 40.00th=[ 5604], 50.00th=[ 5604], 60.00th=[ 5604], 00:21:34.544 | 70.00th=[ 5669], 80.00th=[ 5669], 90.00th=[ 5669], 95.00th=[ 5669], 00:21:34.544 | 99.00th=[ 6128], 99.50th=[ 6194], 99.90th=[ 8455], 99.95th=[ 9896], 00:21:34.544 | 99.99th=[10159] 00:21:34.544 bw ( KiB/s): min=44568, max=46040, per=100.00%, avg=45438.00, stdev=688.04, samples=4 00:21:34.544 iops : min=11142, max=11510, avg=11359.50, stdev=172.01, samples=4 00:21:34.544 write: IOPS=11.3k, BW=44.1MiB/s (46.2MB/s)(88.4MiB/2005msec); 0 zone resets 00:21:34.544 slat (nsec): min=2470, max=18700, avg=2672.81, stdev=577.39 00:21:34.544 clat (usec): min=3296, max=10119, avg=5627.17, stdev=197.66 00:21:34.544 lat (usec): min=3305, max=10122, avg=5629.85, stdev=197.60 00:21:34.544 clat percentiles (usec): 00:21:34.544 | 1.00th=[ 5080], 5.00th=[ 5538], 10.00th=[ 5604], 20.00th=[ 5604], 00:21:34.544 | 30.00th=[ 5604], 40.00th=[ 5604], 50.00th=[ 5604], 60.00th=[ 5604], 00:21:34.544 | 70.00th=[ 5669], 80.00th=[ 5669], 90.00th=[ 5669], 95.00th=[ 5669], 00:21:34.544 | 99.00th=[ 6128], 99.50th=[ 6194], 99.90th=[ 8586], 99.95th=[ 9896], 00:21:34.544 | 99.99th=[10028] 00:21:34.544 bw ( KiB/s): min=44792, max=45744, per=99.93%, avg=45130.00, stdev=436.20, samples=4 00:21:34.544 iops : min=11198, max=11436, avg=11282.50, stdev=109.05, samples=4 00:21:34.544 lat (msec) : 4=0.08%, 10=99.89%, 20=0.03% 00:21:34.544 cpu : usr=99.35%, sys=0.05%, ctx=21, majf=0, minf=4 00:21:34.544 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:34.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:34.544 issued rwts: total=22773,22638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.544 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:34.544 00:21:34.544 Run status group 0 (all jobs): 00:21:34.544 READ: bw=44.4MiB/s (46.5MB/s), 44.4MiB/s-44.4MiB/s (46.5MB/s-46.5MB/s), io=89.0MiB (93.3MB), run=2005-2005msec 00:21:34.544 WRITE: bw=44.1MiB/s (46.2MB/s), 44.1MiB/s-44.1MiB/s (46.2MB/s-46.2MB/s), io=88.4MiB (92.7MB), run=2005-2005msec 00:21:34.544 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:21:34.544 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:21:34.544 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:34.545 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:34.545 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:34.545 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:34.545 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:34.545 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:34.545 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:34.545 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:34.545 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:34.545 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:34.545 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:34.545 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:34.545 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:34.545 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:34.545 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:34.545 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:34.545 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:34.545 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:34.545 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:34.545 12:35:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:21:34.802 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:34.802 fio-3.35 00:21:34.802 Starting 1 thread 00:21:37.333 00:21:37.333 test: (groupid=0, jobs=1): err= 0: pid=2814406: Wed Nov 20 12:35:42 2024 00:21:37.333 read: IOPS=9338, BW=146MiB/s (153MB/s)(289MiB/1983msec) 00:21:37.333 slat (nsec): min=3867, max=52201, avg=4392.89, stdev=1231.00 00:21:37.333 clat (usec): min=707, max=11952, avg=2703.93, stdev=2172.96 00:21:37.333 lat (usec): min=711, max=11956, avg=2708.32, stdev=2173.37 00:21:37.333 clat percentiles (usec): 00:21:37.333 | 1.00th=[ 1090], 5.00th=[ 1237], 10.00th=[ 1336], 20.00th=[ 1467], 00:21:37.333 | 30.00th=[ 1582], 40.00th=[ 1713], 50.00th=[ 1893], 60.00th=[ 2073], 00:21:37.333 | 70.00th=[ 2278], 80.00th=[ 2606], 90.00th=[ 7570], 95.00th=[ 7898], 00:21:37.333 | 99.00th=[10159], 99.50th=[10945], 99.90th=[11600], 99.95th=[11731], 00:21:37.333 | 99.99th=[11863] 00:21:37.333 bw ( KiB/s): min=66432, max=85824, per=49.26%, avg=73600.00, stdev=8582.17, samples=4 00:21:37.333 iops : min= 4152, max= 5364, avg=4600.00, stdev=536.39, samples=4 00:21:37.333 write: IOPS=5332, BW=83.3MiB/s (87.4MB/s)(150MiB/1798msec); 0 zone resets 00:21:37.333 slat (nsec): min=40858, max=98795, avg=43001.18, stdev=5736.77 00:21:37.333 clat (usec): min=6490, max=28745, avg=19572.87, stdev=2947.31 00:21:37.333 lat (usec): min=6531, max=28786, avg=19615.87, stdev=2946.16 00:21:37.333 clat percentiles (usec): 00:21:37.333 | 1.00th=[10290], 5.00th=[15270], 10.00th=[16188], 20.00th=[17433], 00:21:37.333 | 30.00th=[18220], 40.00th=[19006], 50.00th=[19530], 60.00th=[20055], 00:21:37.333 | 70.00th=[20841], 80.00th=[21627], 90.00th=[23200], 95.00th=[24249], 00:21:37.333 | 99.00th=[27395], 99.50th=[27919], 99.90th=[28443], 99.95th=[28705], 00:21:37.333 | 99.99th=[28705] 00:21:37.333 bw ( KiB/s): min=69760, max=86016, per=88.79%, avg=75752.00, stdev=7211.65, samples=4 00:21:37.333 iops : min= 4360, max= 5376, avg=4734.50, stdev=450.73, samples=4 00:21:37.333 lat (usec) : 750=0.01%, 1000=0.14% 00:21:37.333 lat (msec) : 2=36.73%, 4=19.03%, 10=9.54%, 20=20.05%, 50=14.52% 00:21:37.333 cpu : usr=97.51%, sys=0.70%, ctx=143, majf=0, minf=17 00:21:37.333 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:21:37.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:37.334 issued rwts: total=18518,9587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.334 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:37.334 00:21:37.334 Run status group 0 (all jobs): 00:21:37.334 READ: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=289MiB (303MB), run=1983-1983msec 00:21:37.334 WRITE: bw=83.3MiB/s (87.4MB/s), 83.3MiB/s-83.3MiB/s (87.4MB/s-87.4MB/s), io=150MiB (157MB), run=1798-1798msec 00:21:37.334 12:35:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:37.334 12:35:43 
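Both fio runs above use the same plugin mechanics: LD_PRELOAD the SPDK NVMe engine into a stock fio binary and encode the RDMA target in --filename. The invocation pattern, reconstructed from the traced commands:

PLUGIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
LD_PRELOAD=$PLUGIN /usr/src/fio/fio \
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096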
nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:37.334 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:37.334 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:37.334 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:37.334 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:37.334 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:21:37.334 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:37.334 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:37.334 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:21:37.334 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:37.334 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:37.334 rmmod nvme_rdma 00:21:37.334 rmmod nvme_fabrics 00:21:37.334 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:37.334 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:21:37.334 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:21:37.334 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2813824 ']' 00:21:37.334 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2813824 00:21:37.334 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2813824 ']' 00:21:37.334 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2813824 00:21:37.334 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:21:37.334 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.334 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2813824 00:21:37.592 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:37.592 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:37.592 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2813824' 00:21:37.592 killing process with pid 2813824 00:21:37.592 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2813824 00:21:37.592 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2813824 00:21:37.851 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:37.851 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:37.851 00:21:37.851 real 0m11.067s 00:21:37.851 user 0m40.166s 00:21:37.851 sys 0m2.744s 00:21:37.851 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:37.851 12:35:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.851 ************************************ 00:21:37.851 END TEST nvmf_fio_host 00:21:37.851 ************************************ 00:21:37.851 12:35:43 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test 
nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:21:37.851 12:35:43 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:37.851 12:35:43 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:37.851 12:35:43 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.851 ************************************ 00:21:37.851 START TEST nvmf_failover 00:21:37.851 ************************************ 00:21:37.851 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:21:37.851 * Looking for test storage... 00:21:37.851 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:37.851 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:37.851 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:21:37.851 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:21:38.112 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:38.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.113 --rc genhtml_branch_coverage=1 00:21:38.113 --rc genhtml_function_coverage=1 00:21:38.113 --rc genhtml_legend=1 00:21:38.113 --rc geninfo_all_blocks=1 00:21:38.113 --rc geninfo_unexecuted_blocks=1 00:21:38.113 00:21:38.113 ' 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:38.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.113 --rc genhtml_branch_coverage=1 00:21:38.113 --rc genhtml_function_coverage=1 00:21:38.113 --rc genhtml_legend=1 00:21:38.113 --rc geninfo_all_blocks=1 00:21:38.113 --rc geninfo_unexecuted_blocks=1 00:21:38.113 00:21:38.113 ' 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:38.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.113 --rc genhtml_branch_coverage=1 00:21:38.113 --rc genhtml_function_coverage=1 00:21:38.113 --rc genhtml_legend=1 00:21:38.113 --rc geninfo_all_blocks=1 00:21:38.113 --rc geninfo_unexecuted_blocks=1 00:21:38.113 00:21:38.113 ' 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:38.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.113 --rc genhtml_branch_coverage=1 00:21:38.113 --rc genhtml_function_coverage=1 00:21:38.113 --rc genhtml_legend=1 00:21:38.113 --rc geninfo_all_blocks=1 00:21:38.113 --rc geninfo_unexecuted_blocks=1 00:21:38.113 00:21:38.113 ' 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.113 12:35:43 
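The block above is scripts/common.sh deciding whether the installed lcov predates 2.x (lt 1.15 2) so the matching coverage flags get exported. A simplified sketch of that comparison, splitting only on dots rather than the script's full '.-:' separator set:

lt() {
    local IFS=. i
    local -a v1=($1) v2=($2)
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
lt 1.15 2 && echo "lcov < 2: keep branch/function coverage opts"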
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:38.113 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:21:38.113 12:35:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
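What follows is the same NIC discovery the fio-host test ran: walk the supported Intel/Mellanox PCI IDs, then list the netdevs each matching device exposes under sysfs. The core per-device lookup, as traced:

pci=0000:83:00.0                                   # one of the two mlx5 ports on this rig
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path prefix
echo "Found net devices under $pci: ${pci_net_devs[*]}"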
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:21:40.020 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:40.020 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:40.281 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:40.281 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:40.281 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:21:40.282 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # 
[[ rdma == rdma ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:21:40.282 Found net devices under 0000:83:00.0: mlx_0_0 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:21:40.282 Found net devices under 0000:83:00.1: mlx_0_1 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # rdma_device_init 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:40.282 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:40.282 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:21:40.282 altname enp131s0f0np0 00:21:40.282 inet 192.168.100.8/24 scope global mlx_0_0 00:21:40.282 valid_lft forever preferred_lft 
forever 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:40.282 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:40.282 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:21:40.282 altname enp131s0f1np1 00:21:40.282 inet 192.168.100.9/24 scope global mlx_0_1 00:21:40.282 valid_lft forever preferred_lft forever 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:40.282 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:40.283 192.168.100.9' 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:40.283 192.168.100.9' 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:40.283 192.168.100.9' 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2815980 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2815980 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2815980 ']' 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.283 12:35:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:40.543 [2024-11-20 12:35:46.050551] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:21:40.543 [2024-11-20 12:35:46.050694] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.543 [2024-11-20 12:35:46.198980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:40.803 [2024-11-20 12:35:46.309302] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.803 [2024-11-20 12:35:46.309394] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.803 [2024-11-20 12:35:46.309427] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.803 [2024-11-20 12:35:46.309464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.803 [2024-11-20 12:35:46.309476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
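The nvmfappstart step above reduces to launching the target on cores 1-3 (-m 0xE) with every tracepoint group enabled (-e 0xFFFF), then waiting on the RPC socket until the app answers. A minimal stand-alone sketch of that sequence, reusing the paths from this run; the poll loop is illustrative only and is not the harness's actual waitforlisten implementation:

  # Launch the NVMe-oF target, then poll its RPC socket until it is serving.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      # Bail out early if the target died during startup instead of spinning forever.
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
      sleep 0.1
  done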
00:21:40.803 [2024-11-20 12:35:46.311635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.803 [2024-11-20 12:35:46.311724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:40.803 [2024-11-20 12:35:46.311768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.370 12:35:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.370 12:35:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:41.370 12:35:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:41.370 12:35:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:41.370 12:35:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:41.628 12:35:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.628 12:35:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:41.887 [2024-11-20 12:35:47.478940] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fea590/0x1feea80) succeed. 00:21:41.887 [2024-11-20 12:35:47.493686] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1febb80/0x2030120) succeed. 00:21:41.887 12:35:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:42.454 Malloc0 00:21:42.454 12:35:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:42.712 12:35:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:42.971 12:35:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:43.229 [2024-11-20 12:35:48.965454] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:43.229 12:35:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:21:43.799 [2024-11-20 12:35:49.298374] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:21:43.799 12:35:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:21:44.074 [2024-11-20 12:35:49.635590] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:21:44.074 12:35:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2816300 00:21:44.074 12:35:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
verify -t 15 -f 00:21:44.074 12:35:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:44.074 12:35:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2816300 /var/tmp/bdevperf.sock 00:21:44.074 12:35:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2816300 ']' 00:21:44.074 12:35:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.074 12:35:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.074 12:35:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:44.074 12:35:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.074 12:35:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:44.332 12:35:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.332 12:35:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:44.332 12:35:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:44.898 NVMe0n1 00:21:44.898 12:35:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:45.157 00:21:45.157 12:35:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2816401 00:21:45.157 12:35:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:45.157 12:35:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:46.091 12:35:51 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:46.714 12:35:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:50.027 12:35:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:50.027 00:21:50.027 12:35:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:21:50.284 12:35:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:53.559 12:35:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:53.559 [2024-11-20 12:35:59.210409] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:53.559 12:35:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:54.493 12:36:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:21:55.059 12:36:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2816401 00:22:00.324 { 00:22:00.324 "results": [ 00:22:00.324 { 00:22:00.324 "job": "NVMe0n1", 00:22:00.324 "core_mask": "0x1", 00:22:00.324 "workload": "verify", 00:22:00.324 "status": "finished", 00:22:00.324 "verify_range": { 00:22:00.324 "start": 0, 00:22:00.324 "length": 16384 00:22:00.324 }, 00:22:00.324 "queue_depth": 128, 00:22:00.324 "io_size": 4096, 00:22:00.324 "runtime": 15.009484, 00:22:00.324 "iops": 9118.501342218027, 00:22:00.324 "mibps": 35.61914586803917, 00:22:00.324 "io_failed": 3963, 00:22:00.324 "io_timeout": 0, 00:22:00.324 "avg_latency_us": 13608.86598705162, 00:22:00.324 "min_latency_us": 588.6103703703703, 00:22:00.324 "max_latency_us": 1050129.4459259259 00:22:00.324 } 00:22:00.324 ], 00:22:00.324 "core_count": 1 00:22:00.324 } 00:22:00.324 12:36:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2816300 00:22:00.324 12:36:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2816300 ']' 00:22:00.324 12:36:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2816300 00:22:00.324 12:36:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:00.324 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.324 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2816300 00:22:00.324 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:00.324 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:00.324 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2816300' 00:22:00.324 killing process with pid 2816300 00:22:00.324 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2816300 00:22:00.324 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2816300 00:22:00.592 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:00.592 [2024-11-20 12:35:49.711714] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
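The "Starting SPDK v25.01-pre" line just above is the first line of try.txt, bdevperf's own log, replayed here by the cat at host/failover.sh@63. Before it scrolls past, the JSON summary printed by run_test is easy to sanity-check: at the 4096-byte io_size this run uses, MiB/s is IOPS x 4096 / 2^20, i.e. IOPS / 256, and 9118.5 / 256 does land on the reported 35.619 MiB/s:

  # Cross-check the bdevperf summary: 4 KiB per I/O => MiB/s = IOPS / 256.
  awk 'BEGIN { iops = 9118.501342218027; printf "%.3f MiB/s\n", iops * 4096 / 1048576 }'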
00:22:00.592 [2024-11-20 12:35:49.711838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2816300 ] 00:22:00.592 [2024-11-20 12:35:49.784886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.592 [2024-11-20 12:35:49.847507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.592 Running I/O for 15 seconds... 00:22:00.592 11520.00 IOPS, 45.00 MiB/s [2024-11-20T11:36:06.358Z] 6803.50 IOPS, 26.58 MiB/s [2024-11-20T11:36:06.358Z] [2024-11-20 12:35:53.131845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.592 [2024-11-20 12:35:53.131898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:8250 p:0 m:0 dnr:0 00:22:00.592 [2024-11-20 12:35:53.131918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.592 [2024-11-20 12:35:53.131933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:8250 p:0 m:0 dnr:0 00:22:00.592 [2024-11-20 12:35:53.131950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.592 [2024-11-20 12:35:53.131964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:8250 p:0 m:0 dnr:0 00:22:00.592 [2024-11-20 12:35:53.131980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.592 [2024-11-20 12:35:53.131995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:8250 p:0 m:0 dnr:0 00:22:00.592 [2024-11-20 12:35:53.134076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:22:00.592 [2024-11-20 12:35:53.134119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
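Every READ and WRITE in the dump below completes with ABORTED - SQ DELETION: removing the 4420 listener tears down the submission queue on that path, so each in-flight command is aborted, and bdev_nvme requeues it toward the 4421 portal named in the "Start failover" notice that follows. That retry path exists because both trids were attached under the same controller name with -x failover; condensed from the rpc.py calls traced earlier in this run:

  # Register two portals for one controller so bdev_nvme can fail over between them.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover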
00:22:00.592 [2024-11-20 12:35:53.134140] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:22:00.592 [2024-11-20 12:35:53.134155] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:22:00.592 [2024-11-20 12:35:53.134185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:108856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434e000 len:0x1000 key:0x183600 00:22:00.592 [2024-11-20 12:35:53.134205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.592 [2024-11-20 12:35:53.134288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x183600 00:22:00.592 [2024-11-20 12:35:53.134308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.592 [2024-11-20 12:35:53.134362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:108872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x183600 00:22:00.592 [2024-11-20 12:35:53.134383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.592 [2024-11-20 12:35:53.134436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:108880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x183600 00:22:00.592 [2024-11-20 12:35:53.134456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.592 [2024-11-20 12:35:53.134520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:108888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x183600 00:22:00.592 [2024-11-20 12:35:53.134552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.592 [2024-11-20 12:35:53.134610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x183600 00:22:00.592 [2024-11-20 12:35:53.134630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.592 [2024-11-20 12:35:53.134684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:108904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x183600 00:22:00.592 [2024-11-20 12:35:53.134705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.592 [2024-11-20 12:35:53.134759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:108912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x183600 00:22:00.592 [2024-11-20 12:35:53.134779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.592 [2024-11-20 12:35:53.134833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 
nsid:1 lba:108920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x183600 00:22:00.592 [2024-11-20 12:35:53.134853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.592 [2024-11-20 12:35:53.134908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:108928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x183600 00:22:00.592 [2024-11-20 12:35:53.134927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.592 [2024-11-20 12:35:53.134982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:108936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x183600 00:22:00.592 [2024-11-20 12:35:53.135002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.592 [2024-11-20 12:35:53.135056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x183600 00:22:00.592 [2024-11-20 12:35:53.135076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.592 [2024-11-20 12:35:53.135130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x183600 00:22:00.592 [2024-11-20 12:35:53.135150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.592 [2024-11-20 12:35:53.135204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:108960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.135224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.135277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:108968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.135297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.135351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:108976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.135377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.135433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:108984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.135454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.135517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:108992 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200004370000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.135537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.135593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.135614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.135670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.135690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.135743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.135763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.135817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.135836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.135890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.135910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.135962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.135982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.136036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.136057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.136110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.136130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.136183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:109064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 
12:35:53.136203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.136267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.136290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.136345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.136366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.136420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.136443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.136519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.136543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.136599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.136619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.136673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.136692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.136746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.136766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.136819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:109128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.136839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.136893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:109136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.136912] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.136966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.136987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.137041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.137061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.137114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.137139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.137194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.137214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.137270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:109176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.137290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.137344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a0000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.137363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.137415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.137436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.137498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.137518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.137573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.137593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.137648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a8000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.137668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.137723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.137743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.137797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:109232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.137817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.137870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.137891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.593 [2024-11-20 12:35:53.137947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x183600 00:22:00.593 [2024-11-20 12:35:53.137973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.138029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.138049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.138104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.138124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.138180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.138202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.138256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.138276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 
[2024-11-20 12:35:53.138330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.138351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.138404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bc000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.138423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.138476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:109304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.138504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.138558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:109312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.138577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.138630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:109320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c2000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.138649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.138703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:109328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c4000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.138726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.138781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c6000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.138803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.138861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c8000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.138882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.138937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ca000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.138957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.139009] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.139029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.139084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.139104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.139158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d0000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.139181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.139236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d2000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.139255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.139309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:109392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d4000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.139331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.139386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d6000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.139406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.139460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:109408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d8000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.139488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.139544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043da000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.139564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.139617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dc000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.139637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.139690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 
nsid:1 lba:109432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043de000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.139715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.139771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:109440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e0000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.139791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.139845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e2000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.139865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.139919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e4000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.139938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.139992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e6000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.140012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.140067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e8000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.140088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.140143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:109480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ea000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.140163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.140217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ec000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.140236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.140291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:109496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ee000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.140311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.140366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109504 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x2000043f0000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.140388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.140441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f2000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.140461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.140521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f4000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.140546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.140601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f6000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.140624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.594 [2024-11-20 12:35:53.140678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f8000 len:0x1000 key:0x183600 00:22:00.594 [2024-11-20 12:35:53.140699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.595 [2024-11-20 12:35:53.140754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fa000 len:0x1000 key:0x183600 00:22:00.595 [2024-11-20 12:35:53.140774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.595 [2024-11-20 12:35:53.140827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fc000 len:0x1000 key:0x183600 00:22:00.595 [2024-11-20 12:35:53.140850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.595 [2024-11-20 12:35:53.140903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fe000 len:0x1000 key:0x183600 00:22:00.595 [2024-11-20 12:35:53.140925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.595 [2024-11-20 12:35:53.140979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:109568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.595 [2024-11-20 12:35:53.140999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.595 [2024-11-20 12:35:53.141051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:109576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.595 [2024-11-20 12:35:53.141071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0
00:22:00.595 [2024-11-20 12:35:53.141124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:00.595 [2024-11-20 12:35:53.141143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0
[... 35 further WRITE command/completion pairs (lba:109592 through lba:109864, len:8 each), all completed ABORTED - SQ DELETION (00/08), elided ...]
00:22:00.596 [2024-11-20 12:35:53.169682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:00.596 [2024-11-20 12:35:53.169709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:00.596 [2024-11-20 12:35:53.169724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109872 len:8 PRP1 0x0 PRP2 0x0
00:22:00.596 [2024-11-20 12:35:53.169740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:00.596 [2024-11-20 12:35:53.169829] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:22:00.596 [2024-11-20 12:35:53.169889] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
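The abort run above is mechanically regular: each in-flight I/O is echoed as a nvme_io_qpair_print_command notice and immediately answered by a spdk_nvme_print_completion notice carrying the same ABORTED - SQ DELETION (00/08) status. A minimal parsing sketch for runs in this shape, assuming the one-record-per-line layout used here; the regexes cover only the tokens visible in this log, and names like parse_records are hypothetical helpers of our own:

import re

# Command echo, e.g.:
#   nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109584 len:8 ...
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: "
    r"(?P<op>READ|WRITE) sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) "
    r"nsid:(?P<nsid>\d+) lba:(?P<lba>\d+) len:(?P<len>\d+)"
)
# Completion echo, e.g.:
#   spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) ...
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: "
    r"(?P<status>[A-Z -]+) \((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)"
)

def parse_records(lines):
    """Yield (command_fields, status, (sct, sc)) per command/completion pair."""
    pending = None
    for line in lines:
        m = CMD_RE.search(line)
        if m:
            pending = m.groupdict()
            continue
        m = CPL_RE.search(line)
        if m and pending is not None:
            yield pending, m.group("status"), (m.group("sct"), m.group("sc"))
            pending = None

Fed the lines of the run above, this would emit one tuple per aborted I/O, e.g. ({'op': 'WRITE', 'sqid': '1', 'cid': '19', 'nsid': '1', 'lba': '109584', 'len': '8'}, 'ABORTED - SQ DELETION', ('00', '08')).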
00:22:00.596 [2024-11-20 12:35:53.173918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:00.596 [2024-11-20 12:35:53.220798] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:22:00.596 7318.67 IOPS, 28.59 MiB/s [2024-11-20T11:36:06.362Z] 8393.75 IOPS, 32.79 MiB/s [2024-11-20T11:36:06.362Z] 8837.00 IOPS, 34.52 MiB/s [2024-11-20T11:36:06.362Z] [2024-11-20 12:35:56.879300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x183900
00:22:00.596 [2024-11-20 12:35:56.879353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0
[... further READ (lba:102016 through lba:102624, SGL KEYED DATA BLOCK, key:0x183900) and WRITE (lba:102632 through lba:103016, SGL DATA BLOCK) command/completion pairs, interleaved, all len:8 and all completed ABORTED - SQ DELETION (00/08), elided ...]
00:22:00.599 [2024-11-20 12:35:56.885716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:00.599 [2024-11-20 12:35:56.885747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:00.599 [2024-11-20 12:35:56.885763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103024 len:8 PRP1 0x0 PRP2 0x0
00:22:00.599 [2024-11-20 12:35:56.885779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:00.599 [2024-11-20 12:35:56.885834] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:22:00.599 [2024-11-20 12:35:56.885857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
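Stripped of the per-I/O echoes, this stretch reads as one failover/reset cycle: queued requests are aborted, the last write (lba:103024) is completed manually with the same SQ DELETION status, bdev_nvme_failover_trid moves the path from 192.168.100.8:4421 to 192.168.100.8:4422, and the controller is marked failed until the subsequent reset. A hedged filter sketch for recovering just that timeline; the tag list mirrors the function names printed in the log, while failover_timeline itself is a hypothetical helper:

# Controller-level milestones, as named in the notices above.
MILESTONES = (
    "nvme_qpair_abort_queued_reqs",
    "nvme_qpair_manual_complete_request",
    "bdev_nvme_failover_ctrlr_unsafe",
    "bdev_nvme_failover_trid",
    "nvme_ctrlr_fail",
    "nvme_ctrlr_disconnect",
    "bdev_nvme_reset_ctrlr_complete",
)

def failover_timeline(lines):
    """Keep only the log lines that mark a step of a failover/reset cycle."""
    return [line for line in lines if any(tag in line for tag in MILESTONES)]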
00:22:00.599 [2024-11-20 12:35:56.889968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:00.599 [2024-11-20 12:35:56.914371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0 00:22:00.599 7364.17 IOPS, 28.77 MiB/s [2024-11-20T11:36:06.365Z] [2024-11-20 12:35:56.968390] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:22:00.599 7956.71 IOPS, 31.08 MiB/s [2024-11-20T11:36:06.365Z] 8431.12 IOPS, 32.93 MiB/s [2024-11-20T11:36:06.365Z] 8801.67 IOPS, 34.38 MiB/s [2024-11-20T11:36:06.366Z] 8596.00 IOPS, 33.58 MiB/s [2024-11-20T11:36:06.366Z] [2024-11-20 12:36:01.555467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:53600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f0000 len:0x1000 key:0x183600 00:22:00.600 [2024-11-20 12:36:01.555535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.555563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.600 [2024-11-20 12:36:01.555583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.555601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.600 [2024-11-20 12:36:01.555616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.555634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.600 [2024-11-20 12:36:01.555649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.555667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.600 [2024-11-20 12:36:01.555682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.555699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.600 [2024-11-20 12:36:01.555714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.555731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.600 [2024-11-20 12:36:01.555748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.555765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.600 [2024-11-20 12:36:01.555780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.555799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.600 [2024-11-20 12:36:01.555814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.555833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.600 [2024-11-20 12:36:01.555848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.555865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.600 [2024-11-20 12:36:01.555881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.555909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.600 [2024-11-20 12:36:01.555934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.555953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x183600 00:22:00.600 [2024-11-20 12:36:01.555969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.555987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x183600 00:22:00.600 [2024-11-20 12:36:01.556002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.556021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x183600 00:22:00.600 [2024-11-20 12:36:01.556036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.556053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:53632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x183600 00:22:00.600 [2024-11-20 12:36:01.556069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.556086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x183600 00:22:00.600 [2024-11-20 12:36:01.556102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.556120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x183600 00:22:00.600 [2024-11-20 12:36:01.556135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.556152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:53656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x183600 00:22:00.600 [2024-11-20 12:36:01.556172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.556189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x183600 00:22:00.600 [2024-11-20 12:36:01.556204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.556221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x183600 00:22:00.600 [2024-11-20 12:36:01.556236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.556253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x183600 00:22:00.600 [2024-11-20 12:36:01.556268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.556289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x183600 00:22:00.600 [2024-11-20 12:36:01.556309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.556327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:53696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x183600 00:22:00.600 [2024-11-20 12:36:01.556343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.556362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x183600 00:22:00.600 [2024-11-20 12:36:01.556377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.556394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:53712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x183600 00:22:00.600 [2024-11-20 12:36:01.556409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.556426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 
lba:53720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x183600 00:22:00.600 [2024-11-20 12:36:01.556442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.556459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x183600 00:22:00.600 [2024-11-20 12:36:01.556475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.556501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.600 [2024-11-20 12:36:01.556516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.556533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.600 [2024-11-20 12:36:01.556549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.556567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.600 [2024-11-20 12:36:01.556582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.556599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.600 [2024-11-20 12:36:01.556614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.600 [2024-11-20 12:36:01.556631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.600 [2024-11-20 12:36:01.556646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.556663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.601 [2024-11-20 12:36:01.556678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.556695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.601 [2024-11-20 12:36:01.556714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.556732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.601 [2024-11-20 12:36:01.556749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.556766] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.556781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.556799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.556815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.556832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:53752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.556847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.556865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:53760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.556880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.556897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:53768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.556912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.556929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.556944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.556961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.556976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.556993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.557008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.557025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.601 [2024-11-20 12:36:01.557040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.557058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54152 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.601 [2024-11-20 12:36:01.557073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.557094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.601 [2024-11-20 12:36:01.557111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.557128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.601 [2024-11-20 12:36:01.557143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.557161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.601 [2024-11-20 12:36:01.557177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.557195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:53800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f8000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.557210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.557234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f6000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.557250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.557268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f4000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.557284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.557302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:53824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.557317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.557336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:53832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.557351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.557370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:53840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e4000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.557386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.557405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d2000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.557420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.557438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:53856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d0000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.557454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.557484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:53864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.557505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.557533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:53872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.557550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.557569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:53880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.557591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.557609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:53888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d8000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.557626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.557643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d6000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.557659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.557676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d4000 len:0x1000 key:0x183600 00:22:00.601 [2024-11-20 12:36:01.557692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.601 [2024-11-20 12:36:01.557710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.557725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 
12:36:01.557742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.557757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.557774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.557789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.557806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.557821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.557838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.557853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.557871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.557886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.557904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.557922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.557940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.557956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.557973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x183600 00:22:00.602 [2024-11-20 12:36:01.557989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x183600 00:22:00.602 [2024-11-20 12:36:01.558022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434e000 len:0x1000 key:0x183600 00:22:00.602 [2024-11-20 12:36:01.558054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434c000 len:0x1000 key:0x183600 00:22:00.602 [2024-11-20 12:36:01.558087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:53944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434a000 len:0x1000 key:0x183600 00:22:00.602 [2024-11-20 12:36:01.558121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:54280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:54288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 
[2024-11-20 12:36:01.558389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.602 [2024-11-20 12:36:01.558854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.602 [2024-11-20 12:36:01.558869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.558886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.558901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.558919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.558935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.558952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:53952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x183600 00:22:00.603 [2024-11-20 12:36:01.558968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.558986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:53960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x183600 00:22:00.603 [2024-11-20 12:36:01.559001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ca000 len:0x1000 key:0x183600 00:22:00.603 [2024-11-20 12:36:01.559034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x183600 00:22:00.603 [2024-11-20 12:36:01.559067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:53984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x183600 00:22:00.603 [2024-11-20 12:36:01.559100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 
sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.559836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.603 [2024-11-20 12:36:01.559852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:edd29000 sqhd:8250 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.561826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:00.603 [2024-11-20 12:36:01.561858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:00.603 [2024-11-20 12:36:01.561872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54616 len:8 PRP1 0x0 PRP2 0x0 00:22:00.603 [2024-11-20 12:36:01.561887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.603 [2024-11-20 12:36:01.561937] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:22:00.603 [2024-11-20 12:36:01.561961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:00.603 [2024-11-20 12:36:01.566018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:00.603 [2024-11-20 12:36:01.591906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0 00:22:00.603 [2024-11-20 12:36:01.642202] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
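With this reset the controller has now failed over across all three listeners (4420 to 4421 to 4422 and back to 4420), so three 'Resetting controller successful' messages should be present in the output, which is exactly what failover.sh@65 verifies next. Restated as a standalone sketch, again assuming the run was saved to try.txt:

count=$(grep -c 'Resetting controller successful' try.txt)
# One successful reset per failover; the test drops three paths in turn.
(( count == 3 )) || { echo "expected 3 successful resets, saw $count"; exit 1; }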
00:22:00.603 8157.82 IOPS, 31.87 MiB/s [2024-11-20T11:36:06.369Z] 8459.33 IOPS, 33.04 MiB/s [2024-11-20T11:36:06.369Z] 8714.46 IOPS, 34.04 MiB/s [2024-11-20T11:36:06.369Z] 8933.29 IOPS, 34.90 MiB/s [2024-11-20T11:36:06.369Z] 9120.73 IOPS, 35.63 MiB/s 00:22:00.603 Latency(us) 00:22:00.603 [2024-11-20T11:36:06.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.603 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:00.603 Verification LBA range: start 0x0 length 0x4000 00:22:00.603 NVMe0n1 : 15.01 9118.50 35.62 264.03 0.00 13608.87 588.61 1050129.45 00:22:00.603 [2024-11-20T11:36:06.369Z] =================================================================================================================== 00:22:00.603 [2024-11-20T11:36:06.369Z] Total : 9118.50 35.62 264.03 0.00 13608.87 588.61 1050129.45 00:22:00.603 Received shutdown signal, test time was about 15.000000 seconds 00:22:00.603 00:22:00.603 Latency(us) 00:22:00.603 [2024-11-20T11:36:06.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.603 [2024-11-20T11:36:06.370Z] =================================================================================================================== 00:22:00.604 [2024-11-20T11:36:06.370Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:00.604 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:00.604 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:00.604 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:00.604 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2818089 00:22:00.604 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:00.604 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2818089 /var/tmp/bdevperf.sock 00:22:00.604 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2818089 ']' 00:22:00.604 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.604 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.604 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
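The trace above relaunches bdevperf idle behind an RPC socket (-z) for a short one-second verify pass, and waitforlisten then polls until the socket answers. A condensed sketch of that launch pattern follows; the polling loop is written out here for illustration and is not the autotest_common.sh implementation:

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
# Poll the RPC socket until bdevperf answers (the trace uses max_retries=100).
for ((i = 0; i < 100; i++)); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done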
00:22:00.604 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.604 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:00.862 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.862 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:00.862 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:22:01.119 [2024-11-20 12:36:06.865055] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:22:01.377 12:36:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:22:01.635 [2024-11-20 12:36:07.194259] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:22:01.635 12:36:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:01.893 NVMe0n1 00:22:01.893 12:36:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:02.460 00:22:02.460 12:36:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:02.718 00:22:02.718 12:36:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:02.718 12:36:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:02.986 12:36:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:03.560 12:36:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:06.841 12:36:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:06.841 12:36:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:06.841 12:36:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2818760 00:22:06.841 12:36:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:06.841 12:36:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2818760 00:22:08.218 { 00:22:08.218 "results": [ 00:22:08.218 { 00:22:08.218 "job": "NVMe0n1", 
00:22:08.218 "core_mask": "0x1", 00:22:08.218 "workload": "verify", 00:22:08.218 "status": "finished", 00:22:08.218 "verify_range": { 00:22:08.218 "start": 0, 00:22:08.218 "length": 16384 00:22:08.218 }, 00:22:08.218 "queue_depth": 128, 00:22:08.218 "io_size": 4096, 00:22:08.218 "runtime": 1.010506, 00:22:08.218 "iops": 11526.898405353357, 00:22:08.218 "mibps": 45.02694689591155, 00:22:08.218 "io_failed": 0, 00:22:08.218 "io_timeout": 0, 00:22:08.218 "avg_latency_us": 11035.773447293448, 00:22:08.218 "min_latency_us": 4344.794074074074, 00:22:08.218 "max_latency_us": 20680.248888888887 00:22:08.218 } 00:22:08.218 ], 00:22:08.218 "core_count": 1 00:22:08.218 } 00:22:08.218 12:36:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:08.218 [2024-11-20 12:36:06.271175] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:22:08.218 [2024-11-20 12:36:06.271291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2818089 ] 00:22:08.218 [2024-11-20 12:36:06.343277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.218 [2024-11-20 12:36:06.405449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.218 [2024-11-20 12:36:09.054538] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:22:08.218 [2024-11-20 12:36:09.055221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:22:08.218 [2024-11-20 12:36:09.055294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:08.218 [2024-11-20 12:36:09.087650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0 00:22:08.218 [2024-11-20 12:36:09.107173] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:22:08.218 Running I/O for 1 seconds... 
00:22:08.218 11520.00 IOPS, 45.00 MiB/s 00:22:08.218 Latency(us) 00:22:08.218 [2024-11-20T11:36:13.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.218 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:08.218 Verification LBA range: start 0x0 length 0x4000 00:22:08.218 NVMe0n1 : 1.01 11526.90 45.03 0.00 0.00 11035.77 4344.79 20680.25 00:22:08.218 [2024-11-20T11:36:13.984Z] =================================================================================================================== 00:22:08.218 [2024-11-20T11:36:13.984Z] Total : 11526.90 45.03 0.00 0.00 11035.77 4344.79 20680.25 00:22:08.218 12:36:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:08.218 12:36:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:08.218 12:36:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:08.477 12:36:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:08.477 12:36:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:09.043 12:36:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:09.301 12:36:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:12.584 12:36:17 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:12.584 12:36:17 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:12.584 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2818089 00:22:12.584 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2818089 ']' 00:22:12.584 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2818089 00:22:12.584 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:12.584 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:12.584 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2818089 00:22:12.584 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:12.584 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:12.584 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2818089' 00:22:12.584 killing process with pid 2818089 00:22:12.584 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2818089 00:22:12.584 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2818089 00:22:12.843 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- 
host/failover.sh@110 -- # sync 00:22:12.843 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:13.101 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:13.101 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:13.101 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:13.101 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:13.101 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:13.101 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:13.101 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:13.101 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:13.101 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:13.101 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:22:13.101 rmmod nvme_rdma 00:22:13.101 rmmod nvme_fabrics 00:22:13.359 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:13.359 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:13.359 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:13.359 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2815980 ']' 00:22:13.359 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2815980 00:22:13.359 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2815980 ']' 00:22:13.359 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2815980 00:22:13.359 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:13.359 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:13.359 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2815980 00:22:13.359 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:13.359 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:13.359 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2815980' 00:22:13.359 killing process with pid 2815980 00:22:13.359 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2815980 00:22:13.359 12:36:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2815980 00:22:13.617 12:36:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:13.617 12:36:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:22:13.617 00:22:13.617 real 0m35.786s 00:22:13.617 user 2m16.082s 00:22:13.617 sys 0m4.071s 00:22:13.617 12:36:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:13.617 12:36:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
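Condensed for readers following the flow rather than the timestamps, the failover exercise traced above reduces to the RPC sequence below. This is a sketch distilled from the trace, not a verbatim copy of host/failover.sh: the rpc and bperf variables are shorthand introduced here, and it assumes the nvmf target and a bdevperf instance listening on /var/tmp/bdevperf.sock are already running.

    # Shorthand for the two RPC endpoints exercised above (names are ours, not the script's).
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    bperf="$rpc -s /var/tmp/bdevperf.sock"

    # Expose the subsystem on two additional ports so the host has failover paths.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422

    # Attach the same controller once per path; -x failover marks the extra
    # transport IDs as standby failover targets rather than active multipath legs.
    for port in 4420 4421 4422; do
        $bperf bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s "$port" \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done

    # Drop the active path under I/O, give the reset a moment, and verify the
    # controller is still reachable on a secondary path.
    $bperf bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    $bperf bdev_nvme_get_controllers | grep -q NVMe0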
00:22:13.617 ************************************ 00:22:13.617 END TEST nvmf_failover 00:22:13.617 ************************************ 00:22:13.617 12:36:19 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:22:13.617 12:36:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:13.617 12:36:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:13.617 12:36:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.617 ************************************ 00:22:13.617 START TEST nvmf_host_discovery 00:22:13.617 ************************************ 00:22:13.617 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:22:13.617 * Looking for test storage... 00:22:13.617 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:13.617 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:13.617 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:22:13.617 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:13.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.877 --rc genhtml_branch_coverage=1 00:22:13.877 --rc genhtml_function_coverage=1 00:22:13.877 --rc genhtml_legend=1 00:22:13.877 --rc geninfo_all_blocks=1 00:22:13.877 --rc geninfo_unexecuted_blocks=1 00:22:13.877 00:22:13.877 ' 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:13.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.877 --rc genhtml_branch_coverage=1 00:22:13.877 --rc genhtml_function_coverage=1 00:22:13.877 --rc genhtml_legend=1 00:22:13.877 --rc geninfo_all_blocks=1 00:22:13.877 --rc geninfo_unexecuted_blocks=1 00:22:13.877 00:22:13.877 ' 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:13.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.877 --rc genhtml_branch_coverage=1 00:22:13.877 --rc genhtml_function_coverage=1 00:22:13.877 --rc genhtml_legend=1 00:22:13.877 --rc geninfo_all_blocks=1 00:22:13.877 --rc geninfo_unexecuted_blocks=1 00:22:13.877 00:22:13.877 ' 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:13.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.877 --rc genhtml_branch_coverage=1 00:22:13.877 --rc genhtml_function_coverage=1 00:22:13.877 --rc genhtml_legend=1 00:22:13.877 --rc geninfo_all_blocks=1 00:22:13.877 --rc geninfo_unexecuted_blocks=1 00:22:13.877 00:22:13.877 ' 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:13.877 12:36:19 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.877 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:13.878 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the 
same IP for host and target.' 00:22:13.878 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:22:13.878 00:22:13.878 real 0m0.206s 00:22:13.878 user 0m0.134s 00:22:13.878 sys 0m0.082s 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.878 ************************************ 00:22:13.878 END TEST nvmf_host_discovery 00:22:13.878 ************************************ 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.878 ************************************ 00:22:13.878 START TEST nvmf_host_multipath_status 00:22:13.878 ************************************ 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:22:13.878 * Looking for test storage... 00:22:13.878 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:22:13.878 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:22:14.138 12:36:19 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:14.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.138 --rc genhtml_branch_coverage=1 00:22:14.138 --rc genhtml_function_coverage=1 00:22:14.138 --rc genhtml_legend=1 00:22:14.138 --rc geninfo_all_blocks=1 00:22:14.138 --rc geninfo_unexecuted_blocks=1 00:22:14.138 00:22:14.138 ' 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:14.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.138 --rc genhtml_branch_coverage=1 00:22:14.138 --rc genhtml_function_coverage=1 00:22:14.138 --rc genhtml_legend=1 00:22:14.138 --rc geninfo_all_blocks=1 00:22:14.138 --rc geninfo_unexecuted_blocks=1 00:22:14.138 00:22:14.138 ' 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:14.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.138 --rc genhtml_branch_coverage=1 00:22:14.138 --rc genhtml_function_coverage=1 00:22:14.138 --rc genhtml_legend=1 00:22:14.138 --rc geninfo_all_blocks=1 00:22:14.138 --rc geninfo_unexecuted_blocks=1 00:22:14.138 00:22:14.138 ' 00:22:14.138 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:14.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.138 --rc genhtml_branch_coverage=1 00:22:14.138 --rc genhtml_function_coverage=1 
00:22:14.138 --rc genhtml_legend=1 00:22:14.138 --rc geninfo_all_blocks=1 00:22:14.138 --rc geninfo_unexecuted_blocks=1 00:22:14.138 00:22:14.138 ' 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:14.139 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:22:14.139 12:36:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:16.676 12:36:21 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # 
(( 2 == 0 )) 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:22:16.676 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:22:16.676 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:22:16.676 Found net devices under 0000:83:00.0: mlx_0_0 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.676 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:22:16.677 Found net devices under 0000:83:00.1: mlx_0_1 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:16.677 
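Stripped of the xtrace noise, the per-interface address harvesting performed below this point is a single pipeline. Here is a sketch of the get_ip_address helper as it appears in the trace (nvmf/common.sh); mlx_0_0 and mlx_0_1 are simply the interface names enumerated on this rig:

    # Print the first IPv4 address assigned to an interface. `ip -o -4 addr show`
    # emits one record per line with "address/prefix" in field 4, so awk grabs
    # the field and cut strips the prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # -> 192.168.100.9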
12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:16.677 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:16.677 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:22:16.677 altname enp131s0f0np0 00:22:16.677 inet 192.168.100.8/24 scope global mlx_0_0 00:22:16.677 valid_lft forever preferred_lft forever 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:16.677 12:36:21 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:16.677 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:16.677 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:22:16.677 altname enp131s0f1np1 00:22:16.677 inet 192.168.100.9/24 scope global mlx_0_1 00:22:16.677 valid_lft forever preferred_lft forever 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in 
$(get_rdma_if_list)
00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1
00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}'
00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1
00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}'
00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:22:16.677 192.168.100.9'
00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:22:16.677 192.168.100.9'
00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1
00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:22:16.677 192.168.100.9'
00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2
00:22:16.677 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1
00:22:16.678 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:22:16.678 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:22:16.678 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:22:16.678 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:22:16.678 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:22:16.678 12:36:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2820699
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2820699
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2820699 ']'
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:16.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:22:16.678 [2024-11-20 12:36:22.072168] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization...
00:22:16.678 [2024-11-20 12:36:22.072280] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:16.678 [2024-11-20 12:36:22.145216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:22:16.678 [2024-11-20 12:36:22.208018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:16.678 [2024-11-20 12:36:22.208082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:16.678 [2024-11-20 12:36:22.208098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:16.678 [2024-11-20 12:36:22.208111] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:16.678 [2024-11-20 12:36:22.208122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
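The xtrace above is the target-side bring-up: nvmf/common.sh walks the RDMA netdevs, extracts one IPv4 address per mlx port, and exports them as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP before loading nvme-rdma and starting nvmf_tgt. A minimal sketch of that discovery step, reconstructed from the traced commands (the get_rdma_if_list helper and the exact common.sh internals are assumptions here, not a verbatim copy):

    # Sketch: pick the first IPv4 address configured on an RDMA netdev.
    # "ip -o -4" prints one address record per line; field 4 is "ADDR/PREFIX".
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # One address per mlx port, newline-separated, as echoed in the trace.
    RDMA_IP_LIST=$(
        for nic in mlx_0_0 mlx_0_1; do
            get_ip_address "$nic"
        done
    )
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9

The head/tail pipelines mirror the @485/@486 trace lines above. With both addresses resolved, nvmfappstart launches nvmf_tgt on cores 0 and 1 (-m 0x3), which the two reactor notices below confirm.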
00:22:16.678 [2024-11-20 12:36:22.212532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:16.678 [2024-11-20 12:36:22.212577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2820699
00:22:16.678 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:22:17.245 [2024-11-20 12:36:22.764101] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfc7560/0xfcba50) succeed.
00:22:17.245 [2024-11-20 12:36:22.778263] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfc8ab0/0x100d0f0) succeed.
00:22:17.245 12:36:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:22:17.502 Malloc0
00:22:17.503 12:36:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:22:18.069 12:36:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:18.327 12:36:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:22:18.586 [2024-11-20 12:36:24.206721] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:22:18.586 12:36:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:22:18.845 [2024-11-20 12:36:24.539578] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:22:18.845 12:36:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2821000
00:22:18.846 12:36:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:22:18.846 12:36:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:22:18.846 12:36:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2821000 /var/tmp/bdevperf.sock
00:22:18.846 12:36:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2821000 ']'
00:22:18.846 12:36:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:18.846 12:36:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:18.846 12:36:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:18.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:18.846 12:36:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:18.846 12:36:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:22:19.413 12:36:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:19.413 12:36:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:22:19.413 12:36:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:22:19.671 12:36:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:22:19.930 Nvme0n1
00:22:19.930 12:36:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:22:20.496 Nvme0n1
00:22:20.496 12:36:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:22:20.496 12:36:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:22:22.398 12:36:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:22:22.398 12:36:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized
00:22:22.656 12:36:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized
00:22:23.223 12:36:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:22:24.158 12:36:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:22:24.158 12:36:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:22:24.158 12:36:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:24.158 12:36:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:22:24.416 12:36:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:24.416 12:36:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:22:24.416 12:36:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:24.416 12:36:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:22:24.743 12:36:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:24.743 12:36:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:22:24.743 12:36:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:24.743 12:36:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:22:25.049 12:36:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:25.049 12:36:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:25.049 12:36:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:25.049 12:36:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:25.308 12:36:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:25.308 12:36:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:22:25.308 12:36:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:25.308 12:36:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:22:25.876 12:36:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:25.876 12:36:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
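Each check_status call in this run expands into six port_status probes: for every listener port (4420 and 4421) it asserts the current, connected, and accessible flags of the matching I/O path, as reported by bdevperf over its RPC socket. A compact sketch of that probe, reconstructed from the repeated @64 trace lines (the real helper in multipath_status.sh may differ in minor details):

    # Sketch: assert one attribute of the io_path that uses a given listener port.
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$("$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }

    # With both listeners optimized under the default active_passive policy,
    # exactly one path is "current" while both stay connected and accessible:
    port_status 4420 current true
    port_status 4421 current false

Every set_ANA_state step that follows issues one nvmf_subsystem_listener_set_ana_state RPC per port against the target, sleeps 1 s so the ANA change can propagate to the host, and then re-runs check_status against the expected flag matrix.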
00:22:25.876 12:36:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.876 12:36:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:26.134 12:36:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.134 12:36:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:26.134 12:36:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:26.392 12:36:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:26.651 12:36:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:28.026 12:36:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:28.026 12:36:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:28.026 12:36:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.026 12:36:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:28.026 12:36:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:28.026 12:36:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:28.026 12:36:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.026 12:36:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:28.593 12:36:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.593 12:36:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:28.593 12:36:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.593 12:36:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:28.851 12:36:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.851 12:36:34 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:28.851 12:36:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.851 12:36:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:29.109 12:36:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.109 12:36:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:29.109 12:36:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.109 12:36:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:29.367 12:36:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.367 12:36:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:29.367 12:36:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.367 12:36:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:29.939 12:36:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.939 12:36:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:29.939 12:36:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:30.200 12:36:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:22:30.458 12:36:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:31.392 12:36:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:31.392 12:36:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:31.392 12:36:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.392 12:36:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:31.959 12:36:37 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.959 12:36:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:31.959 12:36:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.959 12:36:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:32.217 12:36:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:32.217 12:36:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:32.217 12:36:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.217 12:36:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:32.475 12:36:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.475 12:36:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:32.475 12:36:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.475 12:36:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:32.733 12:36:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.734 12:36:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:32.734 12:36:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.734 12:36:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:33.300 12:36:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.300 12:36:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:33.300 12:36:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.300 12:36:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:33.557 12:36:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.558 12:36:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:22:33.558 12:36:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:33.816 12:36:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:22:34.074 12:36:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:35.008 12:36:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:35.008 12:36:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:35.008 12:36:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.008 12:36:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:35.574 12:36:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.574 12:36:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:35.574 12:36:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.574 12:36:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:35.833 12:36:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:35.833 12:36:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:35.833 12:36:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.833 12:36:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:36.091 12:36:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.091 12:36:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:36.091 12:36:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.091 12:36:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:36.657 12:36:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.657 12:36:42 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:36.657 12:36:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.657 12:36:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:36.915 12:36:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.915 12:36:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:36.915 12:36:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.915 12:36:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:37.173 12:36:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:37.173 12:36:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:37.173 12:36:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:22:37.430 12:36:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:22:37.688 12:36:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:39.060 12:36:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:39.060 12:36:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:39.060 12:36:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.060 12:36:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:39.060 12:36:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:39.060 12:36:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:39.060 12:36:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.060 12:36:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:39.627 12:36:45 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:39.627 12:36:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:39.627 12:36:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.627 12:36:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:39.886 12:36:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.886 12:36:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:39.886 12:36:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.886 12:36:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:40.144 12:36:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.144 12:36:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:40.144 12:36:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.144 12:36:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:40.403 12:36:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:40.403 12:36:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:40.403 12:36:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.403 12:36:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:40.970 12:36:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:40.970 12:36:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:40.970 12:36:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:22:41.229 12:36:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:41.489 12:36:47 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:42.429 12:36:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:42.429 12:36:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:42.429 12:36:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.429 12:36:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:42.688 12:36:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:42.688 12:36:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:42.688 12:36:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.688 12:36:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:43.255 12:36:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.255 12:36:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:43.255 12:36:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.255 12:36:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:43.514 12:36:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.514 12:36:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:43.514 12:36:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.514 12:36:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:43.773 12:36:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.773 12:36:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:43.773 12:36:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.773 12:36:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:44.341 12:36:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]] 00:22:44.341 12:36:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:44.341 12:36:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.341 12:36:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:44.600 12:36:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.600 12:36:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:44.858 12:36:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:44.858 12:36:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:22:45.116 12:36:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:45.683 12:36:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:46.618 12:36:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:46.618 12:36:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:46.618 12:36:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.618 12:36:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:46.876 12:36:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.876 12:36:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:46.876 12:36:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.876 12:36:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:47.134 12:36:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.134 12:36:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:47.134 12:36:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.134 12:36:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:47.700 12:36:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.700 12:36:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:47.700 12:36:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.700 12:36:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:47.958 12:36:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.958 12:36:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:47.958 12:36:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.958 12:36:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:48.217 12:36:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:48.217 12:36:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:48.217 12:36:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.217 12:36:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:48.475 12:36:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:48.475 12:36:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:48.475 12:36:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:49.047 12:36:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:49.305 12:36:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:50.241 12:36:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:50.241 12:36:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:50.241 12:36:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.241 12:36:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:50.499 12:36:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:50.499 12:36:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:50.499 12:36:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.499 12:36:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:50.758 12:36:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.758 12:36:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:50.758 12:36:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.758 12:36:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:51.324 12:36:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.324 12:36:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:51.324 12:36:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.324 12:36:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:51.583 12:36:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.583 12:36:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:51.583 12:36:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.583 12:36:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:51.841 12:36:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.841 12:36:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:51.841 12:36:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.841 12:36:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:52.099 12:36:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.099 12:36:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:52.099 12:36:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:52.665 12:36:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:22:52.923 12:36:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:53.858 12:36:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:53.858 12:36:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:53.858 12:36:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.858 12:36:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:54.116 12:36:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.116 12:36:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:54.116 12:36:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.116 12:36:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:54.682 12:37:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.682 12:37:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:54.682 12:37:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.682 12:37:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:54.942 12:37:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.942 12:37:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:54.942 12:37:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:22:54.942 12:37:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:55.243 12:37:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.243 12:37:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:55.243 12:37:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.243 12:37:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:55.529 12:37:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.529 12:37:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:55.529 12:37:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.529 12:37:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:56.096 12:37:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:56.096 12:37:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:56.096 12:37:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:56.355 12:37:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:22:56.613 12:37:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:57.548 12:37:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:57.548 12:37:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:57.548 12:37:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.548 12:37:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:57.805 12:37:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.805 12:37:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:57.805 12:37:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.805 12:37:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:58.373 12:37:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:58.373 12:37:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:58.373 12:37:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.373 12:37:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:58.631 12:37:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.631 12:37:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:58.631 12:37:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.631 12:37:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:58.889 12:37:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.889 12:37:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:58.889 12:37:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.889 12:37:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:59.147 12:37:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.147 12:37:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:59.147 12:37:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.147 12:37:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:59.714 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:59.714 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2821000 00:22:59.714 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2821000 ']' 00:22:59.714 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2821000 00:22:59.714 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
00:22:59.714 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2821000 ']'
00:22:59.714 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2821000
00:22:59.714 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:22:59.714 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:59.714 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2821000
00:22:59.714 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:22:59.714 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:22:59.714 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2821000'
killing process with pid 2821000
12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2821000
12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2821000
00:22:59.714 {
00:22:59.714   "results": [
00:22:59.714     {
00:22:59.714       "job": "Nvme0n1",
00:22:59.714       "core_mask": "0x4",
00:22:59.714       "workload": "verify",
00:22:59.714       "status": "terminated",
00:22:59.714       "verify_range": {
00:22:59.714         "start": 0,
00:22:59.714         "length": 16384
00:22:59.714       },
00:22:59.714       "queue_depth": 128,
00:22:59.714       "io_size": 4096,
00:22:59.714       "runtime": 39.010022,
00:22:59.714       "iops": 10375.949031764196,
00:22:59.714       "mibps": 40.53105090532889,
00:22:59.714       "io_failed": 0,
00:22:59.714       "io_timeout": 0,
00:22:59.714       "avg_latency_us": 12303.199334419282,
00:22:59.714       "min_latency_us": 1480.628148148148,
00:22:59.714       "max_latency_us": 4026531.84
00:22:59.714     }
00:22:59.714   ],
00:22:59.714   "core_count": 1
00:22:59.714 }
00:22:59.979 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2821000
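Two derived figures in that results block are worth a quick cross-check. mibps follows directly from iops × io_size: 10375.949 × 4096 B ≈ 42,499,887 B/s, and 42,499,887 / 2^20 ≈ 40.53 MiB/s, matching the reported 40.531. Likewise a runtime of ~39.01 s against the 90-second I/O run requested below is expected, not a hang: the script kills bdevperf as soon as its multipath checks pass, which is why the job reports "status": "terminated" together with io_failed: 0.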
00:22:59.979 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:59.979 [2024-11-20 12:36:24.614759] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization...
00:22:59.979 [2024-11-20 12:36:24.614880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2821000 ]
00:22:59.979 [2024-11-20 12:36:24.687191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:59.979 [2024-11-20 12:36:24.750053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
00:22:59.979 11588.00 IOPS, 45.27 MiB/s [2024-11-20T11:37:05.745Z]
11712.00 IOPS, 45.75 MiB/s [2024-11-20T11:37:05.745Z]
11733.33 IOPS, 45.83 MiB/s [2024-11-20T11:37:05.745Z]
11752.00 IOPS, 45.91 MiB/s [2024-11-20T11:37:05.745Z]
11771.40 IOPS, 45.98 MiB/s [2024-11-20T11:37:05.745Z]
11776.00 IOPS, 46.00 MiB/s [2024-11-20T11:37:05.745Z]
11782.43 IOPS, 46.03 MiB/s [2024-11-20T11:37:05.745Z]
11792.00 IOPS, 46.06 MiB/s [2024-11-20T11:37:05.745Z]
11804.22 IOPS, 46.11 MiB/s [2024-11-20T11:37:05.745Z]
11802.30 IOPS, 46.10 MiB/s [2024-11-20T11:37:05.745Z]
11808.18 IOPS, 46.13 MiB/s [2024-11-20T11:37:05.745Z]
11808.00 IOPS, 46.12 MiB/s [2024-11-20T11:37:05.745Z]
11811.38 IOPS, 46.14 MiB/s [2024-11-20T11:37:05.745Z]
11810.43 IOPS, 46.13 MiB/s [2024-11-20T11:37:05.745Z]
11812.40 IOPS, 46.14 MiB/s [2024-11-20T11:37:05.745Z]
11814.12 IOPS, 46.15 MiB/s [2024-11-20T11:37:05.745Z]
[2024-11-20 12:36:43.095478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:59.979 [2024-11-20 12:36:43.095564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
[... well over a hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided for readability: every outstanding WRITE (lba 21560-22280) and READ (lba 21264-21552) on qid:1 completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0049 onward, during one of the test's ANA transitions ...]
INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:59.983 [2024-11-20 12:36:43.102990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.984 [2024-11-20 12:36:43.103006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.984 [2024-11-20 12:36:43.103039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.984 [2024-11-20 12:36:43.103073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.984 [2024-11-20 12:36:43.103107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.984 [2024-11-20 12:36:43.103140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.984 [2024-11-20 12:36:43.103177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.984 [2024-11-20 12:36:43.103213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.984 [2024-11-20 12:36:43.103247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.984 [2024-11-20 12:36:43.103281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.984 [2024-11-20 12:36:43.103315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.984 [2024-11-20 12:36:43.103348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.984 [2024-11-20 12:36:43.103382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.984 [2024-11-20 12:36:43.103415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.984 [2024-11-20 12:36:43.103449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.984 [2024-11-20 12:36:43.103490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.984 [2024-11-20 12:36:43.103526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.984 [2024-11-20 12:36:43.103560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.984 [2024-11-20 12:36:43.103598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.984 [2024-11-20 12:36:43.103633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21264 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x2000043c6000 len:0x1000 key:0x181100 00:22:59.984 [2024-11-20 12:36:43.103668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.984 [2024-11-20 12:36:43.103702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x181100 00:22:59.984 [2024-11-20 12:36:43.103736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x181100 00:22:59.984 [2024-11-20 12:36:43.103771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x181100 00:22:59.984 [2024-11-20 12:36:43.103806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x181100 00:22:59.984 [2024-11-20 12:36:43.103841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x181100 00:22:59.984 [2024-11-20 12:36:43.103875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x181100 00:22:59.984 [2024-11-20 12:36:43.103913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x181100 00:22:59.984 [2024-11-20 12:36:43.103948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.103966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x181100 00:22:59.984 [2024-11-20 
12:36:43.103982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.104005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x181100 00:22:59.984 [2024-11-20 12:36:43.104021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.104040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x181100 00:22:59.984 [2024-11-20 12:36:43.104056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:59.984 [2024-11-20 12:36:43.104075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x181100 00:22:59.985 [2024-11-20 12:36:43.104090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x181100 00:22:59.985 [2024-11-20 12:36:43.104124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x181100 00:22:59.985 [2024-11-20 12:36:43.104159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431c000 len:0x1000 key:0x181100 00:22:59.985 [2024-11-20 12:36:43.104193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431e000 len:0x1000 key:0x181100 00:22:59.985 [2024-11-20 12:36:43.104226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004320000 len:0x1000 key:0x181100 00:22:59.985 [2024-11-20 12:36:43.104260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004322000 len:0x1000 key:0x181100 00:22:59.985 [2024-11-20 12:36:43.104294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004324000 len:0x1000 key:0x181100 00:22:59.985 [2024-11-20 12:36:43.104328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004326000 len:0x1000 key:0x181100 00:22:59.985 [2024-11-20 12:36:43.104362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x181100 00:22:59.985 [2024-11-20 12:36:43.104401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432a000 len:0x1000 key:0x181100 00:22:59.985 [2024-11-20 12:36:43.104434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x181100 00:22:59.985 [2024-11-20 12:36:43.104469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432e000 len:0x1000 key:0x181100 00:22:59.985 [2024-11-20 12:36:43.104513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x181100 00:22:59.985 [2024-11-20 12:36:43.104548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004332000 len:0x1000 key:0x181100 00:22:59.985 [2024-11-20 12:36:43.104582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004334000 len:0x1000 key:0x181100 00:22:59.985 [2024-11-20 12:36:43.104617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004336000 len:0x1000 key:0x181100 00:22:59.985 [2024-11-20 12:36:43.104651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004338000 len:0x1000 key:0x181100 00:22:59.985 [2024-11-20 12:36:43.104686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433a000 len:0x1000 key:0x181100 00:22:59.985 [2024-11-20 12:36:43.104720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.985 [2024-11-20 12:36:43.104754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.985 [2024-11-20 12:36:43.104793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.985 [2024-11-20 12:36:43.104827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.985 [2024-11-20 12:36:43.104860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.985 [2024-11-20 12:36:43.104894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.985 [2024-11-20 12:36:43.104928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:59.985 [2024-11-20 12:36:43.104947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22120 len:8 
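The (03/02) printed with every failed completion above is the NVMe (Status Code Type/Status Code) pair: SCT 0x3 is Path Related Status and SC 0x02 is Asymmetric Access Inaccessible, the ANA state this multipath test deliberately drives the active path into. A minimal bash sketch for tallying such records from a saved copy of this log (build.log is a placeholder name, not a file the harness writes):

# count failed completions, and the failed commands per opcode
grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log | wc -l
awk '/nvme_io_qpair_print_command/ { for (i = 1; i <= NF; i++) if ($i == "WRITE" || $i == "READ") n[$i]++ } END { for (op in n) print op, n[op] }' build.log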
00:22:59.985 11721.53 IOPS, 45.79 MiB/s [2024-11-20T11:37:05.751Z] 11070.33 IOPS, 43.24 MiB/s [2024-11-20T11:37:05.751Z] 10487.68 IOPS, 40.97 MiB/s [2024-11-20T11:37:05.751Z] 9963.30 IOPS, 38.92 MiB/s [2024-11-20T11:37:05.751Z] 9556.24 IOPS, 37.33 MiB/s [2024-11-20T11:37:05.751Z]
9662.95 IOPS, 37.75 MiB/s [2024-11-20T11:37:05.751Z] 9759.57 IOPS, 38.12 MiB/s [2024-11-20T11:37:05.751Z] 9846.46 IOPS, 38.46 MiB/s [2024-11-20T11:37:05.751Z] 9907.32 IOPS, 38.70 MiB/s [2024-11-20T11:37:05.751Z] 9935.35 IOPS, 38.81 MiB/s [2024-11-20T11:37:05.751Z]
9965.19 IOPS, 38.93 MiB/s [2024-11-20T11:37:05.751Z] 9991.43 IOPS, 39.03 MiB/s [2024-11-20T11:37:05.751Z] 10044.17 IOPS, 39.24 MiB/s [2024-11-20T11:37:05.751Z] 10104.87 IOPS, 39.47 MiB/s [2024-11-20T11:37:05.751Z] 10160.35 IOPS, 39.69 MiB/s [2024-11-20T11:37:05.751Z]
10212.50 IOPS, 39.89 MiB/s [2024-11-20T11:37:05.751Z] 10231.39 IOPS, 39.97 MiB/s [2024-11-20T11:37:05.751Z] 10241.18 IOPS, 40.00 MiB/s [2024-11-20T11:37:05.751Z] 10252.40 IOPS, 40.05 MiB/s [2024-11-20T11:37:05.751Z]
[2024-11-20 12:37:02.200511 - 12:37:02.203631] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated per-command records condensed: WRITE sqid:1 nsid:1 lba:64896-65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 nsid:1 lba:64296-64880 len:8 SGL KEYED DATA BLOCK (varying buffer addresses) len:0x1000 key:0x181100, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0004-0043 p:0 m:0 dnr:0
00:22:59.987 10264.08 IOPS, 40.09 MiB/s [2024-11-20T11:37:05.753Z] 10305.62 IOPS, 40.26 MiB/s [2024-11-20T11:37:05.753Z] 10344.50 IOPS, 40.41 MiB/s [2024-11-20T11:37:05.753Z]
Received shutdown signal, test time was about 39.010929 seconds
00:22:59.987
00:22:59.987 Latency(us)
00:22:59.987 [2024-11-20T11:37:05.753Z] Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average   min      max
00:22:59.987 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:59.987 Verification LBA range: start 0x0 length 0x4000
00:22:59.987 Nvme0n1 :            39.01       10375.95  40.53  0.00    0.00  12303.20  1480.63  4026531.84
00:22:59.987 [2024-11-20T11:37:05.753Z] ===================================================================================================================
00:22:59.987 [2024-11-20T11:37:05.753Z] Total :  10375.95  40.53  0.00  0.00  12303.20  1480.63  4026531.84
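The summary is internally consistent: 10375.95 IOPS of 4096-byte verify IOs works out to 10375.95 * 4096 / 2^20 = 40.53 MiB/s, and with queue depth 128 Little's law predicts about 128 / 12303.20 us = 10404 IOPS, within roughly 0.3% of the measured average. A quick sanity check (illustrative only; assumes bc is installed):

echo '10375.95 * 4096 / 1048576' | bc -l   # = 40.53... MiB/s
echo '128 / 0.01230320' | bc -l            # = 10403.8... IOPS (queue depth / average latency)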
00:22:59.987 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:00.246 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:23:00.246 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:00.246 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:23:00.246 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:00.246 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:23:00.246 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:23:00.246 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:23:00.246 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:23:00.246 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:00.246 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:00.246 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:23:00.246 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:23:00.246 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2820699 ']'
00:23:00.246 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2820699
00:23:00.247 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2820699 ']'
00:23:00.247 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2820699
00:23:00.247 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:23:00.247 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:00.247 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2820699
00:23:00.247 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:00.247 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:00.247 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2820699'
killing process with pid 2820699
00:23:00.247 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2820699
00:23:00.247 12:37:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2820699
00:23:00.506 12:37:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:00.506 12:37:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
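Condensed, the teardown traced above amounts to the sequence below (a sketch assembled from the xtrace records, not a verbatim copy of nvmf/common.sh; pid 2820699 is this run's nvmf target process):

/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
trap - SIGINT SIGTERM EXIT        # drop the error traps installed when the test started
sync
set +e                            # the for i in {1..20} loop retries module removal while references drain
modprobe -v -r nvme-rdma          # emits the rmmod nvme_rdma / rmmod nvme_fabrics lines above
modprobe -v -r nvme-fabrics
set -e
kill -0 2820699 && kill 2820699   # killprocess: confirm the pid is alive and not sudo, then terminate it
wait 2820699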
nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:23:00.506 00:23:00.506 real 0m46.683s 00:23:00.506 user 2m37.521s 00:23:00.506 sys 0m6.476s 00:23:00.506 12:37:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.506 12:37:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:00.506 ************************************ 00:23:00.506 END TEST nvmf_host_multipath_status 00:23:00.506 ************************************ 00:23:00.506 12:37:06 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:23:00.506 12:37:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:00.506 12:37:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.506 12:37:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.506 ************************************ 00:23:00.506 START TEST nvmf_discovery_remove_ifc 00:23:00.506 ************************************ 00:23:00.506 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:23:00.765 * Looking for test storage... 00:23:00.765 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 
)) 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:00.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.765 --rc genhtml_branch_coverage=1 00:23:00.765 --rc genhtml_function_coverage=1 00:23:00.765 --rc genhtml_legend=1 00:23:00.765 --rc geninfo_all_blocks=1 00:23:00.765 --rc geninfo_unexecuted_blocks=1 00:23:00.765 00:23:00.765 ' 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:00.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.765 --rc genhtml_branch_coverage=1 00:23:00.765 --rc genhtml_function_coverage=1 00:23:00.765 --rc genhtml_legend=1 00:23:00.765 --rc geninfo_all_blocks=1 00:23:00.765 --rc geninfo_unexecuted_blocks=1 00:23:00.765 00:23:00.765 ' 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:00.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.765 --rc genhtml_branch_coverage=1 00:23:00.765 --rc genhtml_function_coverage=1 00:23:00.765 --rc genhtml_legend=1 00:23:00.765 --rc geninfo_all_blocks=1 00:23:00.765 --rc geninfo_unexecuted_blocks=1 00:23:00.765 00:23:00.765 ' 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:00.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.765 --rc genhtml_branch_coverage=1 00:23:00.765 --rc genhtml_function_coverage=1 00:23:00.765 --rc genhtml_legend=1 00:23:00.765 --rc geninfo_all_blocks=1 00:23:00.765 --rc geninfo_unexecuted_blocks=1 00:23:00.765 00:23:00.765 ' 00:23:00.765 12:37:06 
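
The cmp_versions trace above is how the harness decides which lcov flag spelling to use: it splits two dotted version strings into fields and compares them element by element. A minimal standalone sketch of the same "less than" check (plain X.Y[.Z] numeric versions assumed; the real helper also splits on "-" and ":"):

    # lt A B: succeed when version A sorts strictly before version B
    lt() {
        local IFS=.
        local -a a=($1) b=($2)   # split on dots into numeric fields
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* option names"
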
nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.765 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:23:00.766 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:23:00.766 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:23:00.766 00:23:00.766 real 0m0.181s 00:23:00.766 user 0m0.124s 00:23:00.766 sys 0m0.066s 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:00.766 ************************************ 00:23:00.766 END TEST nvmf_discovery_remove_ifc 00:23:00.766 ************************************ 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.766 ************************************ 00:23:00.766 START TEST nvmf_identify_kernel_target 00:23:00.766 ************************************ 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:23:00.766 * Looking for test storage... 
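
nvmf_discovery_remove_ifc exits immediately on this transport, as logged above: the test needs the same IP configured on host and target, which the rdma stack here cannot do. The guard is a one-liner; as a standalone sketch (TEST_TRANSPORT assumed to carry the --transport argument):

    # Skip guard traced above (discovery_remove_ifc.sh@14-16)
    if [[ $TEST_TRANSPORT == rdma ]]; then
        echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
        exit 0
    fi
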
00:23:00.766 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:23:00.766 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:23:01.026 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:01.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.027 --rc genhtml_branch_coverage=1 00:23:01.027 --rc genhtml_function_coverage=1 00:23:01.027 --rc genhtml_legend=1 00:23:01.027 --rc geninfo_all_blocks=1 00:23:01.027 --rc geninfo_unexecuted_blocks=1 00:23:01.027 00:23:01.027 ' 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:01.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.027 --rc genhtml_branch_coverage=1 00:23:01.027 --rc genhtml_function_coverage=1 00:23:01.027 --rc genhtml_legend=1 00:23:01.027 --rc geninfo_all_blocks=1 00:23:01.027 --rc geninfo_unexecuted_blocks=1 00:23:01.027 00:23:01.027 ' 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:01.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.027 --rc genhtml_branch_coverage=1 00:23:01.027 --rc genhtml_function_coverage=1 00:23:01.027 --rc genhtml_legend=1 00:23:01.027 --rc geninfo_all_blocks=1 00:23:01.027 --rc geninfo_unexecuted_blocks=1 00:23:01.027 00:23:01.027 ' 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:01.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.027 --rc genhtml_branch_coverage=1 00:23:01.027 --rc genhtml_function_coverage=1 00:23:01.027 --rc genhtml_legend=1 00:23:01.027 --rc geninfo_all_blocks=1 00:23:01.027 --rc geninfo_unexecuted_blocks=1 00:23:01.027 00:23:01.027 ' 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:01.027 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:23:01.027 12:37:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # 
local -ga x722 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:23:03.575 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:23:03.575 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:03.575 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:23:03.576 Found net devices under 0000:83:00.0: mlx_0_0 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:23:03.576 Found net devices under 0000:83:00.1: mlx_0_1 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.576 12:37:08 
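
Enumeration above finds two Mellanox functions (0x15b3:0x1015, ConnectX-4 Lx) at 0000:83:00.0/.1 and resolves each to its net interface through sysfs. The same lookup in isolation (standard sysfs layout assumed):

    # PCI function -> net interface, as traced above
    for pci in 0000:83:00.0 0000:83:00.1; do
        for netdir in /sys/bus/pci/devices/$pci/net/*; do
            echo "Found net devices under $pci: ${netdir##*/}"   # mlx_0_0 / mlx_0_1
        done
    done
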
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:03.576 
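
rdma_device_init above loads the kernel RDMA stack before any IPs are assigned; the explicit order matters only loosely, since modprobe resolves dependencies itself. Standalone (an RDMA-capable NIC driver such as mlx5_core assumed already bound):

    # Kernel modules loaded by load_ib_rdma_modules (nvmf/common.sh@66-72)
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
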
12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:03.576 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:03.576 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:23:03.576 altname enp131s0f0np0 00:23:03.576 inet 192.168.100.8/24 scope global mlx_0_0 00:23:03.576 valid_lft forever preferred_lft forever 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:03.576 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:03.576 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:23:03.576 altname enp131s0f1np1 00:23:03.576 inet 192.168.100.9/24 scope global mlx_0_1 00:23:03.576 valid_lft forever preferred_lft forever 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:23:03.576 
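
allocate_nic_ips walks the RDMA interface list and reads each address back with the get_ip_address helper traced above; the ip addr show output confirms 192.168.100.8/24 on mlx_0_0 and 192.168.100.9/24 on mlx_0_1 (both links still DOWN at this point). The helper in isolation:

    # First IPv4 address of an interface, as traced above (nvmf/common.sh@116-117)
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # -> 192.168.100.9
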
12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:03.576 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in 
$(get_rdma_if_list) 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:23:03.577 192.168.100.9' 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:23:03.577 192.168.100.9' 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:23:03.577 192.168.100.9' 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:03.577 12:37:08 
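
With both ports addressed, the harness builds RDMA_IP_LIST and splits it: the first line becomes the first target IP, the remainder the second. As a sketch:

    # Target-IP selection traced above (nvmf/common.sh@484-486)
    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
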
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:23:03.577 12:37:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:03.577 12:37:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:03.577 12:37:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:23:04.516 Waiting for block devices as requested 00:23:04.516 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:23:04.775 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:23:04.775 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:23:04.775 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:23:05.034 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:23:05.034 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:23:05.034 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:23:05.034 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:23:05.294 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:23:05.294 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:23:05.294 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:23:05.555 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:23:05.555 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:23:05.555 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:23:05.555 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:23:05.813 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:23:05.814 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:23:05.814 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:05.814 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:05.814 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:05.814 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:05.814 
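
Once a usable, non-zoned namespace block device turns up (/dev/nvme0n1 here, after setup.sh reset rebinds it from vfio-pci, and "No valid GPT data, bailing" confirms it is not in use), configure_kernel_target wires a kernel nvmet target together through configfs, traced below. xtrace does not print redirection targets, so the attribute file names in this sketch are the standard nvmet configfs ones and are an assumption, not read from the log:

    # Sketch of the configfs wiring traced below (attribute names assumed)
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"   # assumed target of 'echo SPDK-...'
    echo 1             > "$subsys/attr_allow_any_host"              # assumed target of the first 'echo 1'
    echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
    echo 1             > "$subsys/namespaces/1/enable"
    echo 192.168.100.8 > "$port/addr_traddr"
    echo rdma          > "$port/addr_trtype"
    echo 4420          > "$port/addr_trsvcid"
    echo ipv4          > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                             # publish the subsystem on the port

The nvme discover call that follows then reports two records on 192.168.100.8:4420, the discovery subsystem and nqn.2016-06.io.spdk:testnqn, which matches what the sketch above exports.
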
12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:05.814 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:05.814 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:05.814 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:05.814 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:06.072 No valid GPT data, bailing 00:23:06.072 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:06.072 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:06.072 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:06.072 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:06.072 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:23:06.072 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:06.072 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:06.072 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:06.072 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:06.072 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:23:06.072 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:23:06.072 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:23:06.072 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:23:06.072 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma 00:23:06.072 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:23:06.072 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:23:06.072 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:06.072 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -a 192.168.100.8 -t rdma -s 4420 00:23:06.072 00:23:06.072 Discovery Log Number of Records 2, Generation counter 2 00:23:06.072 =====Discovery Log Entry 0====== 00:23:06.072 trtype: rdma 00:23:06.072 adrfam: ipv4 00:23:06.072 subtype: current discovery subsystem 00:23:06.072 treq: not specified, sq flow control disable supported 00:23:06.072 portid: 1 00:23:06.072 trsvcid: 4420 00:23:06.072 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:23:06.072 traddr: 192.168.100.8 00:23:06.072 eflags: none 00:23:06.072 rdma_prtype: not specified 00:23:06.072 rdma_qptype: connected 00:23:06.072 rdma_cms: rdma-cm 00:23:06.072 rdma_pkey: 0x0000 00:23:06.072 =====Discovery Log Entry 1====== 00:23:06.072 trtype: rdma 00:23:06.072 adrfam: ipv4 00:23:06.072 subtype: nvme subsystem 00:23:06.072 treq: not specified, sq flow control disable supported 00:23:06.072 portid: 1 00:23:06.072 trsvcid: 4420 00:23:06.072 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:06.072 traddr: 192.168.100.8 00:23:06.072 eflags: none 00:23:06.072 rdma_prtype: not specified 00:23:06.072 rdma_qptype: connected 00:23:06.072 rdma_cms: rdma-cm 00:23:06.072 rdma_pkey: 0x0000 00:23:06.072 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:23:06.072 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:06.330 ===================================================== 00:23:06.330 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:06.330 ===================================================== 00:23:06.330 Controller Capabilities/Features 00:23:06.330 ================================ 00:23:06.330 Vendor ID: 0000 00:23:06.330 Subsystem Vendor ID: 0000 00:23:06.330 Serial Number: 4a81a59bf4704258d709 00:23:06.330 Model Number: Linux 00:23:06.330 Firmware Version: 6.8.9-20 00:23:06.330 Recommended Arb Burst: 0 00:23:06.330 IEEE OUI Identifier: 00 00 00 00:23:06.330 Multi-path I/O 00:23:06.330 May have multiple subsystem ports: No 00:23:06.330 May have multiple controllers: No 00:23:06.330 Associated with SR-IOV VF: No 00:23:06.330 Max Data Transfer Size: Unlimited 00:23:06.330 Max Number of Namespaces: 0 00:23:06.330 Max Number of I/O Queues: 1024 00:23:06.330 NVMe Specification Version (VS): 1.3 00:23:06.330 NVMe Specification Version (Identify): 1.3 00:23:06.330 Maximum Queue Entries: 128 00:23:06.330 Contiguous Queues Required: No 00:23:06.330 Arbitration Mechanisms Supported 00:23:06.330 Weighted Round Robin: Not Supported 00:23:06.330 Vendor Specific: Not Supported 00:23:06.330 Reset Timeout: 7500 ms 00:23:06.330 Doorbell Stride: 4 bytes 00:23:06.330 NVM Subsystem Reset: Not Supported 00:23:06.330 Command Sets Supported 00:23:06.330 NVM Command Set: Supported 00:23:06.330 Boot Partition: Not Supported 00:23:06.330 Memory Page Size Minimum: 4096 bytes 00:23:06.330 Memory Page Size Maximum: 4096 bytes 00:23:06.330 Persistent Memory Region: Not Supported 00:23:06.330 Optional Asynchronous Events Supported 00:23:06.330 Namespace Attribute Notices: Not Supported 00:23:06.330 Firmware Activation Notices: Not Supported 00:23:06.330 ANA Change Notices: Not Supported 00:23:06.330 PLE Aggregate Log Change Notices: Not Supported 00:23:06.330 LBA Status Info Alert Notices: Not Supported 00:23:06.330 EGE Aggregate Log Change Notices: Not Supported 00:23:06.330 Normal NVM Subsystem Shutdown event: Not Supported 00:23:06.330 Zone Descriptor Change Notices: Not Supported 00:23:06.330 Discovery Log Change Notices: Supported 00:23:06.330 Controller Attributes 00:23:06.330 128-bit Host Identifier: Not Supported 00:23:06.330 Non-Operational Permissive Mode: Not Supported 00:23:06.330 NVM Sets: Not Supported 00:23:06.330 Read Recovery Levels: Not Supported 00:23:06.330 Endurance Groups: Not Supported 00:23:06.330 Predictable Latency Mode: Not 
Supported 00:23:06.330 Traffic Based Keep ALive: Not Supported 00:23:06.330 Namespace Granularity: Not Supported 00:23:06.330 SQ Associations: Not Supported 00:23:06.330 UUID List: Not Supported 00:23:06.330 Multi-Domain Subsystem: Not Supported 00:23:06.330 Fixed Capacity Management: Not Supported 00:23:06.330 Variable Capacity Management: Not Supported 00:23:06.330 Delete Endurance Group: Not Supported 00:23:06.330 Delete NVM Set: Not Supported 00:23:06.330 Extended LBA Formats Supported: Not Supported 00:23:06.330 Flexible Data Placement Supported: Not Supported 00:23:06.330 00:23:06.330 Controller Memory Buffer Support 00:23:06.330 ================================ 00:23:06.330 Supported: No 00:23:06.330 00:23:06.330 Persistent Memory Region Support 00:23:06.330 ================================ 00:23:06.330 Supported: No 00:23:06.330 00:23:06.330 Admin Command Set Attributes 00:23:06.330 ============================ 00:23:06.330 Security Send/Receive: Not Supported 00:23:06.330 Format NVM: Not Supported 00:23:06.330 Firmware Activate/Download: Not Supported 00:23:06.330 Namespace Management: Not Supported 00:23:06.330 Device Self-Test: Not Supported 00:23:06.330 Directives: Not Supported 00:23:06.330 NVMe-MI: Not Supported 00:23:06.330 Virtualization Management: Not Supported 00:23:06.330 Doorbell Buffer Config: Not Supported 00:23:06.330 Get LBA Status Capability: Not Supported 00:23:06.330 Command & Feature Lockdown Capability: Not Supported 00:23:06.330 Abort Command Limit: 1 00:23:06.331 Async Event Request Limit: 1 00:23:06.331 Number of Firmware Slots: N/A 00:23:06.331 Firmware Slot 1 Read-Only: N/A 00:23:06.331 Firmware Activation Without Reset: N/A 00:23:06.331 Multiple Update Detection Support: N/A 00:23:06.331 Firmware Update Granularity: No Information Provided 00:23:06.331 Per-Namespace SMART Log: No 00:23:06.331 Asymmetric Namespace Access Log Page: Not Supported 00:23:06.331 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:06.331 Command Effects Log Page: Not Supported 00:23:06.331 Get Log Page Extended Data: Supported 00:23:06.331 Telemetry Log Pages: Not Supported 00:23:06.331 Persistent Event Log Pages: Not Supported 00:23:06.331 Supported Log Pages Log Page: May Support 00:23:06.331 Commands Supported & Effects Log Page: Not Supported 00:23:06.331 Feature Identifiers & Effects Log Page:May Support 00:23:06.331 NVMe-MI Commands & Effects Log Page: May Support 00:23:06.331 Data Area 4 for Telemetry Log: Not Supported 00:23:06.331 Error Log Page Entries Supported: 1 00:23:06.331 Keep Alive: Not Supported 00:23:06.331 00:23:06.331 NVM Command Set Attributes 00:23:06.331 ========================== 00:23:06.331 Submission Queue Entry Size 00:23:06.331 Max: 1 00:23:06.331 Min: 1 00:23:06.331 Completion Queue Entry Size 00:23:06.331 Max: 1 00:23:06.331 Min: 1 00:23:06.331 Number of Namespaces: 0 00:23:06.331 Compare Command: Not Supported 00:23:06.331 Write Uncorrectable Command: Not Supported 00:23:06.331 Dataset Management Command: Not Supported 00:23:06.331 Write Zeroes Command: Not Supported 00:23:06.331 Set Features Save Field: Not Supported 00:23:06.331 Reservations: Not Supported 00:23:06.331 Timestamp: Not Supported 00:23:06.331 Copy: Not Supported 00:23:06.331 Volatile Write Cache: Not Present 00:23:06.331 Atomic Write Unit (Normal): 1 00:23:06.331 Atomic Write Unit (PFail): 1 00:23:06.331 Atomic Compare & Write Unit: 1 00:23:06.331 Fused Compare & Write: Not Supported 00:23:06.331 Scatter-Gather List 00:23:06.331 SGL Command Set: Supported 00:23:06.331 SGL 
Keyed: Supported 00:23:06.331 SGL Bit Bucket Descriptor: Not Supported 00:23:06.331 SGL Metadata Pointer: Not Supported 00:23:06.331 Oversized SGL: Not Supported 00:23:06.331 SGL Metadata Address: Not Supported 00:23:06.331 SGL Offset: Supported 00:23:06.331 Transport SGL Data Block: Not Supported 00:23:06.331 Replay Protected Memory Block: Not Supported 00:23:06.331 00:23:06.331 Firmware Slot Information 00:23:06.331 ========================= 00:23:06.331 Active slot: 0 00:23:06.331 00:23:06.331 00:23:06.331 Error Log 00:23:06.331 ========= 00:23:06.331 00:23:06.331 Active Namespaces 00:23:06.331 ================= 00:23:06.331 Discovery Log Page 00:23:06.331 ================== 00:23:06.331 Generation Counter: 2 00:23:06.331 Number of Records: 2 00:23:06.331 Record Format: 0 00:23:06.331 00:23:06.331 Discovery Log Entry 0 00:23:06.331 ---------------------- 00:23:06.331 Transport Type: 1 (RDMA) 00:23:06.331 Address Family: 1 (IPv4) 00:23:06.331 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:06.331 Entry Flags: 00:23:06.331 Duplicate Returned Information: 0 00:23:06.331 Explicit Persistent Connection Support for Discovery: 0 00:23:06.331 Transport Requirements: 00:23:06.331 Secure Channel: Not Specified 00:23:06.331 Port ID: 1 (0x0001) 00:23:06.331 Controller ID: 65535 (0xffff) 00:23:06.331 Admin Max SQ Size: 32 00:23:06.331 Transport Service Identifier: 4420 00:23:06.331 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:06.331 Transport Address: 192.168.100.8 00:23:06.331 Transport Specific Address Subtype - RDMA 00:23:06.331 RDMA QP Service Type: 1 (Reliable Connected) 00:23:06.331 RDMA Provider Type: 1 (No provider specified) 00:23:06.331 RDMA CM Service: 1 (RDMA_CM) 00:23:06.331 Discovery Log Entry 1 00:23:06.331 ---------------------- 00:23:06.331 Transport Type: 1 (RDMA) 00:23:06.331 Address Family: 1 (IPv4) 00:23:06.331 Subsystem Type: 2 (NVM Subsystem) 00:23:06.331 Entry Flags: 00:23:06.331 Duplicate Returned Information: 0 00:23:06.331 Explicit Persistent Connection Support for Discovery: 0 00:23:06.331 Transport Requirements: 00:23:06.331 Secure Channel: Not Specified 00:23:06.331 Port ID: 1 (0x0001) 00:23:06.331 Controller ID: 65535 (0xffff) 00:23:06.331 Admin Max SQ Size: 32 00:23:06.331 Transport Service Identifier: 4420 00:23:06.331 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:06.331 Transport Address: 192.168.100.8 00:23:06.331 Transport Specific Address Subtype - RDMA 00:23:06.331 RDMA QP Service Type: 1 (Reliable Connected) 00:23:06.331 RDMA Provider Type: 1 (No provider specified) 00:23:06.331 RDMA CM Service: 1 (RDMA_CM) 00:23:06.331 12:37:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:06.590 get_feature(0x01) failed 00:23:06.590 get_feature(0x02) failed 00:23:06.590 get_feature(0x04) failed 00:23:06.590 ===================================================== 00:23:06.590 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:23:06.590 ===================================================== 00:23:06.590 Controller Capabilities/Features 00:23:06.590 ================================ 00:23:06.590 Vendor ID: 0000 00:23:06.590 Subsystem Vendor ID: 0000 00:23:06.590 Serial Number: 1c6df51c381db48ed9fa 00:23:06.590 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:06.590 Firmware 
Version: 6.8.9-20 00:23:06.590 Recommended Arb Burst: 6 00:23:06.590 IEEE OUI Identifier: 00 00 00 00:23:06.590 Multi-path I/O 00:23:06.590 May have multiple subsystem ports: Yes 00:23:06.590 May have multiple controllers: Yes 00:23:06.590 Associated with SR-IOV VF: No 00:23:06.590 Max Data Transfer Size: 1048576 00:23:06.590 Max Number of Namespaces: 1024 00:23:06.590 Max Number of I/O Queues: 128 00:23:06.590 NVMe Specification Version (VS): 1.3 00:23:06.590 NVMe Specification Version (Identify): 1.3 00:23:06.590 Maximum Queue Entries: 128 00:23:06.590 Contiguous Queues Required: No 00:23:06.590 Arbitration Mechanisms Supported 00:23:06.590 Weighted Round Robin: Not Supported 00:23:06.590 Vendor Specific: Not Supported 00:23:06.590 Reset Timeout: 7500 ms 00:23:06.590 Doorbell Stride: 4 bytes 00:23:06.590 NVM Subsystem Reset: Not Supported 00:23:06.590 Command Sets Supported 00:23:06.590 NVM Command Set: Supported 00:23:06.590 Boot Partition: Not Supported 00:23:06.590 Memory Page Size Minimum: 4096 bytes 00:23:06.590 Memory Page Size Maximum: 4096 bytes 00:23:06.590 Persistent Memory Region: Not Supported 00:23:06.590 Optional Asynchronous Events Supported 00:23:06.590 Namespace Attribute Notices: Supported 00:23:06.590 Firmware Activation Notices: Not Supported 00:23:06.590 ANA Change Notices: Supported 00:23:06.590 PLE Aggregate Log Change Notices: Not Supported 00:23:06.590 LBA Status Info Alert Notices: Not Supported 00:23:06.590 EGE Aggregate Log Change Notices: Not Supported 00:23:06.590 Normal NVM Subsystem Shutdown event: Not Supported 00:23:06.590 Zone Descriptor Change Notices: Not Supported 00:23:06.590 Discovery Log Change Notices: Not Supported 00:23:06.590 Controller Attributes 00:23:06.590 128-bit Host Identifier: Supported 00:23:06.590 Non-Operational Permissive Mode: Not Supported 00:23:06.591 NVM Sets: Not Supported 00:23:06.591 Read Recovery Levels: Not Supported 00:23:06.591 Endurance Groups: Not Supported 00:23:06.591 Predictable Latency Mode: Not Supported 00:23:06.591 Traffic Based Keep ALive: Supported 00:23:06.591 Namespace Granularity: Not Supported 00:23:06.591 SQ Associations: Not Supported 00:23:06.591 UUID List: Not Supported 00:23:06.591 Multi-Domain Subsystem: Not Supported 00:23:06.591 Fixed Capacity Management: Not Supported 00:23:06.591 Variable Capacity Management: Not Supported 00:23:06.591 Delete Endurance Group: Not Supported 00:23:06.591 Delete NVM Set: Not Supported 00:23:06.591 Extended LBA Formats Supported: Not Supported 00:23:06.591 Flexible Data Placement Supported: Not Supported 00:23:06.591 00:23:06.591 Controller Memory Buffer Support 00:23:06.591 ================================ 00:23:06.591 Supported: No 00:23:06.591 00:23:06.591 Persistent Memory Region Support 00:23:06.591 ================================ 00:23:06.591 Supported: No 00:23:06.591 00:23:06.591 Admin Command Set Attributes 00:23:06.591 ============================ 00:23:06.591 Security Send/Receive: Not Supported 00:23:06.591 Format NVM: Not Supported 00:23:06.591 Firmware Activate/Download: Not Supported 00:23:06.591 Namespace Management: Not Supported 00:23:06.591 Device Self-Test: Not Supported 00:23:06.591 Directives: Not Supported 00:23:06.591 NVMe-MI: Not Supported 00:23:06.591 Virtualization Management: Not Supported 00:23:06.591 Doorbell Buffer Config: Not Supported 00:23:06.591 Get LBA Status Capability: Not Supported 00:23:06.591 Command & Feature Lockdown Capability: Not Supported 00:23:06.591 Abort Command Limit: 4 00:23:06.591 Async Event Request Limit: 4 
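# ---- editor's note (not part of the captured log) ----
# The attribute dump running around this point comes from SPDK's identify
# example connected over fabrics. identify_kernel_nvmf.sh invokes it twice,
# the transport ID string selecting the controller (commands condensed from
# the trace; the long /var/jenkins/... binary path is abbreviated):
spdk_nvme_identify -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
spdk_nvme_identify -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
# The first dump (discovery controller) reported zero namespaces; this
# second one, against the NVM subsystem, advertises ANA and a single
# 931GiB namespace. The get_feature(0x01/0x02/0x04) failures noted above
# appear to be benign: the kernel target simply does not implement those
# optional features.
# ---- end note; dump continues ----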
00:23:06.591 Number of Firmware Slots: N/A 00:23:06.591 Firmware Slot 1 Read-Only: N/A 00:23:06.591 Firmware Activation Without Reset: N/A 00:23:06.591 Multiple Update Detection Support: N/A 00:23:06.591 Firmware Update Granularity: No Information Provided 00:23:06.591 Per-Namespace SMART Log: Yes 00:23:06.591 Asymmetric Namespace Access Log Page: Supported 00:23:06.591 ANA Transition Time : 10 sec 00:23:06.591 00:23:06.591 Asymmetric Namespace Access Capabilities 00:23:06.591 ANA Optimized State : Supported 00:23:06.591 ANA Non-Optimized State : Supported 00:23:06.591 ANA Inaccessible State : Supported 00:23:06.591 ANA Persistent Loss State : Supported 00:23:06.591 ANA Change State : Supported 00:23:06.591 ANAGRPID is not changed : No 00:23:06.591 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:06.591 00:23:06.591 ANA Group Identifier Maximum : 128 00:23:06.591 Number of ANA Group Identifiers : 128 00:23:06.591 Max Number of Allowed Namespaces : 1024 00:23:06.591 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:06.591 Command Effects Log Page: Supported 00:23:06.591 Get Log Page Extended Data: Supported 00:23:06.591 Telemetry Log Pages: Not Supported 00:23:06.591 Persistent Event Log Pages: Not Supported 00:23:06.591 Supported Log Pages Log Page: May Support 00:23:06.591 Commands Supported & Effects Log Page: Not Supported 00:23:06.591 Feature Identifiers & Effects Log Page:May Support 00:23:06.591 NVMe-MI Commands & Effects Log Page: May Support 00:23:06.591 Data Area 4 for Telemetry Log: Not Supported 00:23:06.591 Error Log Page Entries Supported: 128 00:23:06.591 Keep Alive: Supported 00:23:06.591 Keep Alive Granularity: 1000 ms 00:23:06.591 00:23:06.591 NVM Command Set Attributes 00:23:06.591 ========================== 00:23:06.591 Submission Queue Entry Size 00:23:06.591 Max: 64 00:23:06.591 Min: 64 00:23:06.591 Completion Queue Entry Size 00:23:06.591 Max: 16 00:23:06.591 Min: 16 00:23:06.591 Number of Namespaces: 1024 00:23:06.591 Compare Command: Not Supported 00:23:06.591 Write Uncorrectable Command: Not Supported 00:23:06.591 Dataset Management Command: Supported 00:23:06.591 Write Zeroes Command: Supported 00:23:06.591 Set Features Save Field: Not Supported 00:23:06.591 Reservations: Not Supported 00:23:06.591 Timestamp: Not Supported 00:23:06.591 Copy: Not Supported 00:23:06.591 Volatile Write Cache: Present 00:23:06.591 Atomic Write Unit (Normal): 1 00:23:06.591 Atomic Write Unit (PFail): 1 00:23:06.591 Atomic Compare & Write Unit: 1 00:23:06.591 Fused Compare & Write: Not Supported 00:23:06.591 Scatter-Gather List 00:23:06.591 SGL Command Set: Supported 00:23:06.591 SGL Keyed: Supported 00:23:06.591 SGL Bit Bucket Descriptor: Not Supported 00:23:06.591 SGL Metadata Pointer: Not Supported 00:23:06.591 Oversized SGL: Not Supported 00:23:06.591 SGL Metadata Address: Not Supported 00:23:06.591 SGL Offset: Supported 00:23:06.591 Transport SGL Data Block: Not Supported 00:23:06.591 Replay Protected Memory Block: Not Supported 00:23:06.591 00:23:06.591 Firmware Slot Information 00:23:06.591 ========================= 00:23:06.591 Active slot: 0 00:23:06.591 00:23:06.591 Asymmetric Namespace Access 00:23:06.591 =========================== 00:23:06.591 Change Count : 0 00:23:06.591 Number of ANA Group Descriptors : 1 00:23:06.591 ANA Group Descriptor : 0 00:23:06.591 ANA Group ID : 1 00:23:06.591 Number of NSID Values : 1 00:23:06.591 Change Count : 0 00:23:06.591 ANA State : 1 00:23:06.591 Namespace Identifier : 1 00:23:06.591 00:23:06.591 Commands Supported and Effects 
00:23:06.591 ============================== 00:23:06.591 Admin Commands 00:23:06.591 -------------- 00:23:06.591 Get Log Page (02h): Supported 00:23:06.591 Identify (06h): Supported 00:23:06.591 Abort (08h): Supported 00:23:06.591 Set Features (09h): Supported 00:23:06.591 Get Features (0Ah): Supported 00:23:06.591 Asynchronous Event Request (0Ch): Supported 00:23:06.591 Keep Alive (18h): Supported 00:23:06.591 I/O Commands 00:23:06.591 ------------ 00:23:06.591 Flush (00h): Supported 00:23:06.591 Write (01h): Supported LBA-Change 00:23:06.591 Read (02h): Supported 00:23:06.591 Write Zeroes (08h): Supported LBA-Change 00:23:06.591 Dataset Management (09h): Supported 00:23:06.591 00:23:06.591 Error Log 00:23:06.591 ========= 00:23:06.591 Entry: 0 00:23:06.591 Error Count: 0x3 00:23:06.591 Submission Queue Id: 0x0 00:23:06.591 Command Id: 0x5 00:23:06.591 Phase Bit: 0 00:23:06.591 Status Code: 0x2 00:23:06.591 Status Code Type: 0x0 00:23:06.591 Do Not Retry: 1 00:23:06.591 Error Location: 0x28 00:23:06.591 LBA: 0x0 00:23:06.591 Namespace: 0x0 00:23:06.591 Vendor Log Page: 0x0 00:23:06.591 ----------- 00:23:06.591 Entry: 1 00:23:06.591 Error Count: 0x2 00:23:06.591 Submission Queue Id: 0x0 00:23:06.591 Command Id: 0x5 00:23:06.591 Phase Bit: 0 00:23:06.591 Status Code: 0x2 00:23:06.591 Status Code Type: 0x0 00:23:06.591 Do Not Retry: 1 00:23:06.591 Error Location: 0x28 00:23:06.591 LBA: 0x0 00:23:06.592 Namespace: 0x0 00:23:06.592 Vendor Log Page: 0x0 00:23:06.592 ----------- 00:23:06.592 Entry: 2 00:23:06.592 Error Count: 0x1 00:23:06.592 Submission Queue Id: 0x0 00:23:06.592 Command Id: 0x0 00:23:06.592 Phase Bit: 0 00:23:06.592 Status Code: 0x2 00:23:06.592 Status Code Type: 0x0 00:23:06.592 Do Not Retry: 1 00:23:06.592 Error Location: 0x28 00:23:06.592 LBA: 0x0 00:23:06.592 Namespace: 0x0 00:23:06.592 Vendor Log Page: 0x0 00:23:06.592 00:23:06.592 Number of Queues 00:23:06.592 ================ 00:23:06.592 Number of I/O Submission Queues: 128 00:23:06.592 Number of I/O Completion Queues: 128 00:23:06.592 00:23:06.592 ZNS Specific Controller Data 00:23:06.592 ============================ 00:23:06.592 Zone Append Size Limit: 0 00:23:06.592 00:23:06.592 00:23:06.592 Active Namespaces 00:23:06.592 ================= 00:23:06.592 get_feature(0x05) failed 00:23:06.592 Namespace ID:1 00:23:06.592 Command Set Identifier: NVM (00h) 00:23:06.592 Deallocate: Supported 00:23:06.592 Deallocated/Unwritten Error: Not Supported 00:23:06.592 Deallocated Read Value: Unknown 00:23:06.592 Deallocate in Write Zeroes: Not Supported 00:23:06.592 Deallocated Guard Field: 0xFFFF 00:23:06.592 Flush: Supported 00:23:06.592 Reservation: Not Supported 00:23:06.592 Namespace Sharing Capabilities: Multiple Controllers 00:23:06.592 Size (in LBAs): 1953525168 (931GiB) 00:23:06.592 Capacity (in LBAs): 1953525168 (931GiB) 00:23:06.592 Utilization (in LBAs): 1953525168 (931GiB) 00:23:06.592 UUID: c49d1f9d-6bbd-4aeb-93d2-2b1d85c5e612 00:23:06.592 Thin Provisioning: Not Supported 00:23:06.592 Per-NS Atomic Units: Yes 00:23:06.592 Atomic Boundary Size (Normal): 0 00:23:06.592 Atomic Boundary Size (PFail): 0 00:23:06.592 Atomic Boundary Offset: 0 00:23:06.592 NGUID/EUI64 Never Reused: No 00:23:06.592 ANA group ID: 1 00:23:06.592 Namespace Write Protected: No 00:23:06.592 Number of LBA Formats: 1 00:23:06.592 Current LBA Format: LBA Format #00 00:23:06.592 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:06.592 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # 
nvmftestfini 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:23:06.592 rmmod nvme_rdma 00:23:06.592 rmmod nvme_fabrics 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:23:06.592 12:37:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:07.967 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:23:07.967 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:23:07.967 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:23:07.967 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:23:07.967 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:23:07.967 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:23:07.967 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:23:07.967 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:23:07.967 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 
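# ---- editor's note (illustrative sketch, not part of the captured log) ----
# clean_kernel_target, just traced, unwinds the configfs setup in reverse:
# disable the namespace, unlink the port, remove the now-empty directories,
# then drop the modules. The "echo 0" in the trace presumably targets the
# namespace enable file (redirections are not echoed):
echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe -r nvmet_rdma nvmet
# ---- end note; device rebinding continues below ----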
00:23:07.967 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:23:07.967 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:23:07.967 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:23:07.967 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:23:08.227 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:23:08.227 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:23:08.227 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:23:09.165 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:23:09.165 00:23:09.165 real 0m8.286s 00:23:09.165 user 0m2.376s 00:23:09.165 sys 0m4.101s 00:23:09.165 12:37:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:09.165 12:37:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.165 ************************************ 00:23:09.165 END TEST nvmf_identify_kernel_target 00:23:09.165 ************************************ 00:23:09.165 12:37:14 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:23:09.165 12:37:14 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:09.165 12:37:14 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:09.165 12:37:14 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.165 ************************************ 00:23:09.165 START TEST nvmf_auth_host 00:23:09.165 ************************************ 00:23:09.165 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:23:09.165 * Looking for test storage... 00:23:09.165 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:23:09.166 12:37:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:23:09.166 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:09.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.426 --rc genhtml_branch_coverage=1 00:23:09.426 --rc genhtml_function_coverage=1 00:23:09.426 --rc genhtml_legend=1 00:23:09.426 --rc geninfo_all_blocks=1 00:23:09.426 --rc geninfo_unexecuted_blocks=1 00:23:09.426 00:23:09.426 ' 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:09.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.426 --rc genhtml_branch_coverage=1 00:23:09.426 --rc genhtml_function_coverage=1 00:23:09.426 --rc genhtml_legend=1 00:23:09.426 --rc geninfo_all_blocks=1 00:23:09.426 --rc geninfo_unexecuted_blocks=1 00:23:09.426 00:23:09.426 ' 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:09.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.426 --rc genhtml_branch_coverage=1 00:23:09.426 --rc genhtml_function_coverage=1 00:23:09.426 --rc genhtml_legend=1 00:23:09.426 --rc geninfo_all_blocks=1 00:23:09.426 --rc geninfo_unexecuted_blocks=1 00:23:09.426 00:23:09.426 ' 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:09.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.426 --rc genhtml_branch_coverage=1 00:23:09.426 --rc genhtml_function_coverage=1 00:23:09.426 --rc genhtml_legend=1 00:23:09.426 --rc geninfo_all_blocks=1 00:23:09.426 --rc geninfo_unexecuted_blocks=1 00:23:09.426 00:23:09.426 ' 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:09.426 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:09.426 12:37:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:23:11.962 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:23:11.962 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:23:11.962 12:37:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:23:11.962 Found net devices under 0000:83:00.0: mlx_0_0 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:23:11.962 Found net devices under 0000:83:00.1: mlx_0_1 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:11.962 12:37:17 
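# ---- editor's note (illustrative sketch, not part of the captured log) ----
# The scan above matched both ports of a ConnectX NIC (vendor 0x15b3,
# device 0x1015) and mapped each PCI function to its netdev by globbing
# sysfs. The same lookup, standalone:
for pci in 0000:83:00.0 0000:83:00.1; do
    ls "/sys/bus/pci/devices/$pci/net/"    # prints mlx_0_0 / mlx_0_1
done
# ---- end note; IB/RDMA module loading continues ----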
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:11.962 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 
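# ---- editor's note (not part of the captured log) ----
# get_ip_address is exactly the pipeline traced just above: take the
# one-line IPv4 view of the interface, keep field 4 (address/prefix), and
# strip the prefix length. allocate_nic_ips runs it per RDMA interface and,
# judging by the [[ -z ]] guard, would only assign an address (starting at
# 192.168.100.8) if the interface had none:
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8
# ---- end note; trace resumes with the resulting ip= value ----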
00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:11.963 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:11.963 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:23:11.963 altname enp131s0f0np0 00:23:11.963 inet 192.168.100.8/24 scope global mlx_0_0 00:23:11.963 valid_lft forever preferred_lft forever 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:11.963 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:11.963 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:23:11.963 altname enp131s0f1np1 00:23:11.963 inet 192.168.100.9/24 scope global mlx_0_1 00:23:11.963 valid_lft forever preferred_lft forever 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@109 -- # continue 2 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:23:11.963 192.168.100.9' 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:23:11.963 192.168.100.9' 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:23:11.963 192.168.100.9' 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@502 -- # modprobe nvme-rdma 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2828889 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2828889 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2828889 ']' 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2f4e4e4d228a45e968c8ada098dd05af 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:11.963 12:37:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.0XI 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2f4e4e4d228a45e968c8ada098dd05af 0 00:23:11.963 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2f4e4e4d228a45e968c8ada098dd05af 0 00:23:11.964 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:11.964 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:11.964 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2f4e4e4d228a45e968c8ada098dd05af 00:23:11.964 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:11.964 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.0XI 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.0XI 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.0XI 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8f91bdf539dc627b61c3420ff24c1a6751d2a9408aa6e8607aaff9742a2e0203 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.AOO 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8f91bdf539dc627b61c3420ff24c1a6751d2a9408aa6e8607aaff9742a2e0203 3 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8f91bdf539dc627b61c3420ff24c1a6751d2a9408aa6e8607aaff9742a2e0203 3 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8f91bdf539dc627b61c3420ff24c1a6751d2a9408aa6e8607aaff9742a2e0203 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.AOO 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.AOO 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.AOO 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:12.222 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1595e99056c206a40f310c85d7c553e5abd6c0c35f2622db 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.sqX 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1595e99056c206a40f310c85d7c553e5abd6c0c35f2622db 0 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1595e99056c206a40f310c85d7c553e5abd6c0c35f2622db 0 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1595e99056c206a40f310c85d7c553e5abd6c0c35f2622db 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.sqX 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.sqX 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.sqX 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=536166d9f0cf22dd278f6c900952ad699c42945dfca3e1ba 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.6LI 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@757 -- # format_dhchap_key 536166d9f0cf22dd278f6c900952ad699c42945dfca3e1ba 2 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 536166d9f0cf22dd278f6c900952ad699c42945dfca3e1ba 2 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=536166d9f0cf22dd278f6c900952ad699c42945dfca3e1ba 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.6LI 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.6LI 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.6LI 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cb3b9b44f19d9bb1f69b721a2f791af8 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ryE 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cb3b9b44f19d9bb1f69b721a2f791af8 1 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cb3b9b44f19d9bb1f69b721a2f791af8 1 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cb3b9b44f19d9bb1f69b721a2f791af8 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:12.223 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:12.482 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ryE 00:23:12.482 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ryE 00:23:12.482 12:37:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ryE 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@751 -- # local digest len file key 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=180e9c56569876b20a3e9f1efa268fb5 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.VQF 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 180e9c56569876b20a3e9f1efa268fb5 1 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 180e9c56569876b20a3e9f1efa268fb5 1 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=180e9c56569876b20a3e9f1efa268fb5 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.VQF 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.VQF 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.VQF 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a4f2cf8e7bfb10b47f66047756b1bc5326d376889d85dcbf 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Ge8 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a4f2cf8e7bfb10b47f66047756b1bc5326d376889d85dcbf 2 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a4f2cf8e7bfb10b47f66047756b1bc5326d376889d85dcbf 2 00:23:12.482 
12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a4f2cf8e7bfb10b47f66047756b1bc5326d376889d85dcbf 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Ge8 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Ge8 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Ge8 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2cff9e08593b8b7d6510c7fe5b1e0162 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.4VL 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2cff9e08593b8b7d6510c7fe5b1e0162 0 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2cff9e08593b8b7d6510c7fe5b1e0162 0 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2cff9e08593b8b7d6510c7fe5b1e0162 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.4VL 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.4VL 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.4VL 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 
-- # local -A digests 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=aefad25c85e517013608e82928f080063b3ed1d4d16eb3a6a7538e0bc1180cd9 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.akK 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key aefad25c85e517013608e82928f080063b3ed1d4d16eb3a6a7538e0bc1180cd9 3 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 aefad25c85e517013608e82928f080063b3ed1d4d16eb3a6a7538e0bc1180cd9 3 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=aefad25c85e517013608e82928f080063b3ed1d4d16eb3a6a7538e0bc1180cd9 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.akK 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.akK 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.akK 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2828889 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2828889 ']' 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
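The gen_dhchap_key traces above all follow one recipe: read N random bytes from /dev/urandom with xxd, then wrap the resulting hex string into the NVMe DH-HMAC-CHAP secret representation via a small inline python step. A minimal standalone sketch of that wrapping, assuming the standard DHHC-1 layout (base64 of the secret followed by a CRC-32 trailer; the CRC variant and byte order are my assumptions here, the canonical logic is format_dhchap_key/format_key in nvmf/common.sh):

# 16 random bytes -> 32 hex chars, exactly as traced for "gen_dhchap_key null 32"
key=$(xxd -p -c0 -l 16 /dev/urandom)
file=$(mktemp -t spdk.key-null.XXX)
# Emit DHHC-1:<digest id>:<base64(secret || CRC-32(secret))>:
python3 - "$key" 0 > "$file" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()        # the ASCII hex string itself is the secret
digest = int(sys.argv[2])            # 0=null, 1=sha256, 2=sha384, 3=sha512
crc = zlib.crc32(secret).to_bytes(4, "little")  # assumption: little-endian CRC-32
print(f"DHHC-1:{digest:02d}:{base64.b64encode(secret + crc).decode()}:")
PY
chmod 0600 "$file"                   # matches the chmod 0600 traced above

Decoding one of the keys traced above (DHHC-1:00:MTU5...u+xWyg==:) yields the 48-character hex secret plus four trailer bytes, which is consistent with this layout.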
00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.482 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.048 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:13.048 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:23:13.048 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:13.048 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.0XI 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.AOO ]] 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AOO 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.sqX 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.6LI ]] 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6LI 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ryE 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.VQF ]] 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VQF 00:23:13.049 12:37:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Ge8 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.4VL ]] 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.4VL 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.akK 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:23:13.049 12:37:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:13.049 12:37:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:23:13.988 Waiting for block devices as requested 00:23:14.246 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:23:14.246 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:23:14.246 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:23:14.503 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:23:14.503 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:23:14.503 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:23:14.503 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:23:14.761 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:23:14.761 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:23:14.761 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:23:15.018 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:23:15.018 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:23:15.018 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:23:15.018 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:23:15.276 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:23:15.276 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:23:15.276 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:15.843 No valid GPT data, bailing 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:15.843 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae --hostid=f19ece52-b769-e111-bd1d-001e673d80ae -a 192.168.100.8 -t rdma -s 4420 00:23:16.102 00:23:16.102 Discovery Log Number of Records 2, Generation counter 2 00:23:16.102 =====Discovery Log Entry 0====== 00:23:16.102 trtype: rdma 00:23:16.102 adrfam: ipv4 00:23:16.102 subtype: current discovery subsystem 00:23:16.102 treq: not specified, sq flow control disable supported 00:23:16.102 portid: 1 00:23:16.102 trsvcid: 4420 00:23:16.102 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:16.102 traddr: 192.168.100.8 00:23:16.102 eflags: none 00:23:16.102 rdma_prtype: not specified 00:23:16.102 rdma_qptype: connected 00:23:16.102 rdma_cms: rdma-cm 00:23:16.102 rdma_pkey: 0x0000 00:23:16.102 =====Discovery Log Entry 1====== 00:23:16.102 trtype: rdma 00:23:16.102 adrfam: ipv4 00:23:16.102 subtype: nvme subsystem 00:23:16.102 treq: not specified, sq flow control disable supported 00:23:16.102 portid: 1 00:23:16.102 trsvcid: 4420 00:23:16.102 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:16.102 traddr: 192.168.100.8 00:23:16.102 eflags: none 00:23:16.102 rdma_prtype: not specified 00:23:16.102 rdma_qptype: connected 00:23:16.102 rdma_cms: rdma-cm 00:23:16.102 rdma_pkey: 0x0000 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: ]] 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:16.102 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.103 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.103 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.103 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:16.103 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:16.103 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:16.103 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:16.103 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.103 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.103 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:16.103 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:16.103 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:16.103 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:16.103 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:16.103 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.103 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.103 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.361 nvme0n1 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: ]] 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.361 12:37:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.361 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.361 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.361 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:16.361 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:16.361 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:16.361 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.361 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.361 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:16.361 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:16.361 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:16.361 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:16.361 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:16.361 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.361 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.361 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.620 nvme0n1 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: ]] 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
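Each connect_authenticate round, like the keyid-0 pass that just completed and the keyid-1 pass starting here, pairs a target-side key-programming step with a host-side authenticated attach. Condensed into one place (the rpc_cmd flags are copied from the traces above; the configfs attribute paths are my assumption based on the Linux nvmet auth layout, since xtrace does not print redirection targets, and the DHHC-1 values are placeholders for the real keys):

# Target side: where nvmet_auth_set_key's echoes presumably land (paths assumed)
host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'       > "$host_cfg/dhchap_hash"      # digest, as echoed above
echo 'ffdhe2048'          > "$host_cfg/dhchap_dhgroup"   # DH group, as echoed above
echo 'DHHC-1:00:<key0>:'  > "$host_cfg/dhchap_key"       # host secret (placeholder)
echo 'DHHC-1:03:<ckey0>:' > "$host_cfg/dhchap_ctrl_key"  # ctrlr secret (placeholder)

# Host side: the attach only completes if DH-HMAC-CHAP succeeds; verify, then detach
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0

Here key0/ckey0 are the keyring names registered with keyring_file_add_key earlier, not file paths; the keyid-1 round below swaps in key1/ckey1 under the same digest and DH group.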
00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.620 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.621 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.879 nvme0n1 00:23:16.879 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.879 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.879 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.879 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.879 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.879 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: ]] 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.139 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.398 nvme0n1 00:23:17.398 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.398 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.398 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.398 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.398 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.398 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.398 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.398 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.398 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.398 12:37:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: ]] 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:17.398 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:17.399 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.399 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.659 nvme0n1 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.659 12:37:23 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
local ip 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.659 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.918 nvme0n1 00:23:17.918 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.918 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.918 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.918 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.918 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.918 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.918 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.918 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.918 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.918 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 
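
The DHHC-1 strings traced here are NVMe DH-HMAC-CHAP shared secrets in the standard "DHHC-1:<hh>:<base64>:" form: <hh> names the optional secret transform (00 = none, 01/02/03 = SHA-256/384/512) and the base64 payload carries the raw secret followed by a 4-byte CRC-32. A minimal bash sketch, not part of the log, that pulls one of these apart (the key literal is copied from the keyid=0 trace line just above; availability of base64 from coreutils is assumed):

  # take the keyid=0 secret exactly as it appears in the xtrace
  key='DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU:'
  hh=${key#DHHC-1:}; hh=${hh%%:*}          # transform id -> 00 (no transform)
  b64=${key#DHHC-1:*:}; b64=${b64%:}       # base64(secret || crc32)
  echo "$b64" | base64 -d | wc -c          # -> 36 bytes = 32-byte secret + 4-byte CRC

The ckey traced on the next line is the controller-side secret used for bidirectional authentication; key slots with no ckey defined (keyid 4 in this run) authenticate the host only.
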
00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: ]] 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.176 12:37:23 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.176 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.434 nvme0n1 00:23:18.434 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.434 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.434 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.434 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.434 12:37:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: ]] 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:18.434 12:37:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:18.434 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.435 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.435 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.693 nvme0n1 00:23:18.693 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.693 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.693 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.693 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.693 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.693 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.693 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.693 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.693 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.693 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
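
Each pass of the "for keyid" loop that resumes below repeats the same pattern already visible above, so it is worth spelling out once. A condensed sketch, reconstructed from the xtrace rather than quoted from host/auth.sh itself; the rpc_cmd names and flags are exactly those in the trace, while how the key0..key4/ckey0..ckey4 key names were registered is not shown in this section and is assumed to have happened earlier:

  # one {digest, dhgroup, keyid} iteration of the auth test
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"           # provision the target: echo 'hmac(sha256)', the dhgroup, key and optional ckey
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})  # expands to nothing when no controller key exists (keyid 4)
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  ip=$(get_main_ns_ip)                                       # for rdma this resolves NVMF_FIRST_TARGET_IP -> 192.168.100.8
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a "$ip" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # attach only succeeds if DH-HMAC-CHAP completed
  rpc_cmd bdev_nvme_detach_controller nvme0

Once keyids 0-4 are exhausted, the outer "for dhgroup" loop advances (ffdhe2048 -> ffdhe3072 -> ffdhe4096 across this section) while the digest stays sha256 throughout.
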
00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: ]] 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP 
]] 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.951 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.210 nvme0n1 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: ]] 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # 
echo DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.210 12:37:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.468 nvme0n1 00:23:19.468 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.468 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.468 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.468 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.468 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.468 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.468 12:37:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.468 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.468 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.468 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.727 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.987 nvme0n1 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.987 
12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: ]] 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.987 12:37:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.554 nvme0n1 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.554 
12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: ]] 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:23:20.554 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.555 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.555 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.555 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.555 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:20.555 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:20.555 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:20.555 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.555 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.555 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:20.555 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:20.555 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:20.555 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:20.555 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:20.555 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.555 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.555 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.121 nvme0n1 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: ]] 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:21.121 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.122 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.122 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:21.122 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:21.122 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:21.122 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:21.122 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:21.122 
12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:21.122 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.122 12:37:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.687 nvme0n1 00:23:21.687 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.687 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.687 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.687 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.687 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.687 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.687 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: ]] 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.688 12:37:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.688 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.254 nvme0n1 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.254 
12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:22.254 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.255 12:37:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.821 nvme0n1 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: ]] 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.821 12:37:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.387 nvme0n1 00:23:23.387 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.387 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.387 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.387 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:23.387 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.387 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.387 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.387 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.387 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.387 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: ]] 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.645 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.233 nvme0n1 00:23:24.233 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.233 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.233 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.233 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.233 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.233 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.233 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.233 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.233 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.233 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.233 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: ]] 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.234 12:37:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.234 12:37:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.192 nvme0n1 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: ]] 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:25.192 12:37:30 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.192 12:37:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.125 nvme0n1 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:26.125 12:37:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.125 12:37:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.059 nvme0n1 00:23:27.059 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.059 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.059 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.059 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.059 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.059 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.059 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.059 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.059 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.059 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.059 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.059 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:27.059 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.059 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:27.059 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.059 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:27.059 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:27.059 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:27.059 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:27.059 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: ]] 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 
00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.060 12:37:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.432 nvme0n1 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: ]] 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.432 12:37:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.805 nvme0n1 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:29.805 12:37:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: ]] 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:29.805 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.806 12:37:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.177 nvme0n1 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.177 
12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: ]] 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.177 12:37:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.554 nvme0n1 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:32.554 12:37:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.554 12:37:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.929 nvme0n1 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: ]] 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
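The block above is one complete connect_authenticate pass, and the same RPC sequence recurs for every digest/dhgroup/keyid combination in this trace: pin the negotiation parameters with bdev_nvme_set_options, attach over RDMA with this round's keys, check the controller name, then detach. A minimal standalone sketch of that sequence follows, assuming SPDK's scripts/rpc.py is the client behind the harness's rpc_cmd wrapper, and that key3/ckey3 are key names registered earlier in the test (their setup is not part of this excerpt):

#!/usr/bin/env bash
# Sketch of one connect_authenticate pass as traced above. Assumes a running
# SPDK host app and SPDK's scripts/rpc.py (the client the harness's rpc_cmd
# wrapper drives); key3/ckey3 are key names registered earlier in the test.
set -e
RPC=scripts/rpc.py

# Pin DH-HMAC-CHAP negotiation to the digest/group this round tests
# (host/auth.sh@60 in the trace).
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Attach over RDMA and authenticate; the controller key enables
# bidirectional auth and is passed only when one exists for this keyid
# (host/auth.sh@61).
$RPC bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3

# Verify the attach succeeded, then detach for the next combination
# (host/auth.sh@64-65).
[[ "$($RPC bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
$RPC bdev_nvme_detach_controller nvme0
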
00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.929 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.930 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:33.930 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:33.930 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:33.930 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:33.930 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:33.930 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.930 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.930 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.188 nvme0n1 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: ]] 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.188 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:34.189 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:34.189 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:34.189 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.189 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.189 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:34.189 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:34.189 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:34.189 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:34.189 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:34.189 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.189 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.189 12:37:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.447 nvme0n1 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:34.447 12:37:40 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: ]] 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.447 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.705 nvme0n1 00:23:34.705 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.705 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.705 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:34.705 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.705 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.705 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.705 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.705 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.705 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.705 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: ]] 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.963 12:37:40 
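The echo 'hmac(sha384)', echo ffdhe2048, and echo DHHC-1:... calls at host/auth.sh@48-51 above are the target-side half, nvmet_auth_set_key, publishing the round's parameters for the host entry. The trace shows only the echoed values, not their destination; the sketch below assumes they land in the Linux kernel nvmet configfs attributes for the host (the paths are my assumption, and the secrets are elided into variables):

# Hypothetical destination of nvmet_auth_set_key's echoes: the nvmet
# configfs host entry. Paths are assumed, not shown in the trace; $key and
# $ckey stand for the DHHC-1:xx:... strings seen above.
HOSTNQN=nqn.2024-02.io.spdk:host0
HOST_DIR=/sys/kernel/config/nvmet/hosts/$HOSTNQN   # assumed path

echo 'hmac(sha384)' > "$HOST_DIR/dhchap_hash"      # digest   (auth.sh@48)
echo 'ffdhe2048'    > "$HOST_DIR/dhchap_dhgroup"   # DH group (auth.sh@49)
echo "$key"         > "$HOST_DIR/dhchap_key"       # host secret (auth.sh@50)
# Written only when a controller key exists for this keyid -- the
# [[ -z ... ]] guard at auth.sh@51:
[[ -n "$ckey" ]] && echo "$ckey" > "$HOST_DIR/dhchap_ctrl_key"
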
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:34.963 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:34.964 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.964 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.222 nvme0n1 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=4 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.222 12:37:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:23:35.481 nvme0n1 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: ]] 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:35.481 
12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.481 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.740 nvme0n1 00:23:35.740 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.740 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.740 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.740 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.740 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.740 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.740 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.740 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.740 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.740 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- 
# for keyid in "${!keys[@]}" 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: ]] 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.998 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.257 nvme0n1 00:23:36.257 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.257 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.257 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.257 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.257 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: ]] 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- 
# echo DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.258 12:37:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.516 nvme0n1 00:23:36.517 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.517 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.517 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.517 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.517 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.517 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.775 12:37:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: ]] 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:36.775 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.776 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.034 nvme0n1 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:37.034 12:37:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.034 12:37:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.293 nvme0n1 00:23:37.293 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.293 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.293 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:37.293 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.293 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.293 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: ]] 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:37.551 12:37:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.551 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.810 nvme0n1 00:23:37.810 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.810 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.810 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.810 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.810 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.068 12:37:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: ]] 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:38.068 12:37:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.068 12:37:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.327 nvme0n1 00:23:38.327 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.327 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.327 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.327 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.327 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.327 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: ]] 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.585 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.152 nvme0n1 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: ]] 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.152 12:37:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.152 12:37:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.410 nvme0n1 00:23:39.411 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.411 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.411 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.411 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.411 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.411 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.411 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.411 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.411 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.411 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.668 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.668 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.668 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:39.668 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.668 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.668 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:39.668 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:39.668 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:39.668 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:39.668 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.669 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.235 nvme0n1 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.235 12:37:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: ]] 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.235 12:37:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.170 nvme0n1 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
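
The xtrace above is produced by a small driver loop in host/auth.sh: for each DH group it walks every configured key ID, programs that key on the target side (the nvmet_auth_set_key helper), and then authenticates against it from the SPDK host (connect_authenticate). A minimal sketch of that loop, reconstructed from the host/auth.sh@101-104 markers in this trace; the dhgroups array contents are an assumption based on the groups exercised in this stretch of the log, not copied from the script itself:

    # Reconstructed driver loop (sketch; inferred from the xtrace markers).
    # Groups listed are the ones visible in this part of the trace.
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for dhgroup in "${dhgroups[@]}"; do                    # host/auth.sh@101
        for keyid in "${!keys[@]}"; do                     # host/auth.sh@102
            # Program key/ckey number $keyid on the target for this
            # digest/dhgroup, then connect from the SPDK host with the
            # matching parameters.
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # host/auth.sh@103
            connect_authenticate sha384 "$dhgroup" "$keyid"  # host/auth.sh@104
        done
    done
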
00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: ]] 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.170 12:37:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.737 nvme0n1 00:23:41.737 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.737 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.737 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.737 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.737 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.737 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: ]] 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:41.995 12:37:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.995 12:37:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.930 nvme0n1 00:23:42.930 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.930 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.930 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.930 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.930 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.930 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.930 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.930 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.930 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.930 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.930 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
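
Each connect_authenticate pass in the trace reduces to four host-side RPCs (host/auth.sh@60-65). Below is a sketch of one ffdhe6144/key2 pass written as plain rpc.py calls; in the suite these go through the rpc_cmd wrapper, so the scripts/rpc.py spelling here is an assumption, while the subcommands and flags are taken verbatim from the trace:

    # One connect_authenticate pass (sketch assembled from the xtrace above).
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144          # @60
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2                   # @61
    # DH-HMAC-CHAP succeeded if the controller now exists under its name.
    [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # @64
    scripts/rpc.py bdev_nvme_detach_controller nvme0                 # @65

The target address comes from the get_main_ns_ip helper (nvmf/common.sh@769-783), which looks up a variable name in ip_candidates keyed by transport (NVMF_FIRST_TARGET_IP for rdma) and dereferences it; that is why the trace logs ip=NVMF_FIRST_TARGET_IP immediately before echoing 192.168.100.8.
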
00:23:42.930 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.930 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:42.930 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.930 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:42.930 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:42.930 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:42.930 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: ]] 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.931 12:37:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.497 nvme0n1 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:43.497 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:43.498 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:43.498 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:43.498 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.498 12:37:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.437 nvme0n1 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: ]] 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.437 12:37:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.813 nvme0n1 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:45.813 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: ]] 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.814 12:37:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:47.187 nvme0n1 00:23:47.187 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.187 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: ]] 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.188 12:37:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.563 nvme0n1 00:23:48.563 12:37:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.563 12:37:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.563 12:37:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.563 12:37:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.563 12:37:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: ]] 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:48.563 
12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.563 12:37:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.937 nvme0n1 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.937 12:37:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.313 nvme0n1 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: ]] 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:51.313 12:37:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.313 nvme0n1 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.313 12:37:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.313 12:37:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: ]] 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:51.313 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:51.314 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.314 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:51.314 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.314 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.314 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.314 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.314 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:51.314 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:51.314 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:51.314 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.314 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.314 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:51.314 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:51.314 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:51.314 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:51.314 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:51.314 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:51.314 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.314 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.573 nvme0n1 00:23:51.573 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.573 12:37:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.573 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.573 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.573 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.573 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.831 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.831 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.831 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.831 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.831 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.831 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.831 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:51.831 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.831 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.831 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:51.831 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:51.831 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:51.831 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:51.831 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.831 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: ]] 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.832 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.090 nvme0n1 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.090 12:37:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: ]] 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:52.090 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:52.091 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.091 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:52.091 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.091 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.091 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.091 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.091 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:52.091 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:52.091 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:52.091 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.091 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.091 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:52.091 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:52.091 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:52.091 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:52.091 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:52.091 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:52.091 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.091 12:37:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.349 nvme0n1 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:52.349 12:37:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.349 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.608 nvme0n1 00:23:52.608 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.608 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.608 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.608 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.608 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.608 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.608 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.608 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.608 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.608 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: ]] 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.867 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.126 nvme0n1 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: ]] 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:53.126 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:53.127 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:53.127 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:53.127 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:53.127 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:53.127 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.127 12:37:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.384 nvme0n1 00:23:53.385 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.385 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.385 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.385 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.385 
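# --- Annotation: what the nvmet_auth_set_key calls above do on the target side ---
# A minimal sketch, not the verbatim helper from host/auth.sh: the digest, dhgroup,
# key, and ckey echoed by auth.sh@48-51 are written into the kernel nvmet configfs
# entry for the host NQN. The exact configfs path and the $key/$ckey variables are
# assumptions for illustration; the host NQN is the one used throughout this run.
hostnqn=nqn.2024-02.io.spdk:host0
host_cfs=/sys/kernel/config/nvmet/hosts/$hostnqn
echo 'hmac(sha512)' > "$host_cfs/dhchap_hash"     # digest, as echoed by auth.sh@48
echo ffdhe3072      > "$host_cfs/dhchap_dhgroup"  # dhgroup, as echoed by auth.sh@49
echo "$key"         > "$host_cfs/dhchap_key"      # DHHC-1:... host secret (auth.sh@50)
# ctrlr key is optional; only written when bidirectional auth is tested (auth.sh@51)
[[ -n $ckey ]] && echo "$ckey" > "$host_cfs/dhchap_ctrl_key"
# --- end annotation ---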
12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.385 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: ]] 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.642 
12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:53.642 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.643 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.643 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.643 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.643 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:53.643 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:53.643 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:53.643 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:53.643 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:53.643 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:53.643 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.643 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.900 nvme0n1 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: ]] 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:53.900 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.901 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:53.901 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.901 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.901 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.901 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.901 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:53.901 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.901 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.901 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.901 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.901 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:53.901 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:53.901 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:53.901 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:53.901 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:53.901 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:53.901 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.901 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.159 nvme0n1 00:23:54.159 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.159 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.159 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.159 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.159 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.159 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.159 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.159 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.159 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.159 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.418 12:37:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.677 nvme0n1 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:54.677 12:38:00 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: ]] 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 
00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.677 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.243 nvme0n1 00:23:55.243 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.243 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.243 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.243 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.243 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.243 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.243 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.243 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: ]] 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.244 12:38:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.810 nvme0n1 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.810 
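# --- Annotation: host-side equivalent of the connect_authenticate RPCs above ---
# A sketch of the same two rpc_cmd calls issued standalone through scripts/rpc.py,
# assuming the DH-HMAC-CHAP secrets were already registered in the SPDK keyring
# under the names key1/ckey1 earlier in the test. Note host/auth.sh@58 builds the
# --dhchap-ctrlr-key argument only when ckeys[keyid] is non-empty (keyid 4 has no
# ckey, so its attach below omits the flag).
./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
# --- end annotation ---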
12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: ]] 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:55.810 12:38:01 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.810 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:55.811 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:55.811 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:55.811 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:55.811 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:55.811 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:55.811 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.811 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.436 nvme0n1 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:56.436 12:38:01 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: ]] 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:56.436 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:56.437 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:56.437 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:56.437 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.437 12:38:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.714 nvme0n1 00:23:56.714 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:23:56.714 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.714 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.714 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.714 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.714 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.714 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.714 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.714 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.714 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.714 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.714 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.714 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:56.714 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.714 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:56.714 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:56.714 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:56.714 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.715 12:38:02 
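# --- Annotation: get_main_ns_ip, reconstructed from the trace above ---
# The repeated nvmf/common.sh@769-783 lines implement transport-based IP selection;
# this sketch of the logic is inferred from the trace (the indirect expansion of
# the chosen environment variable name is an assumption, not verified source):
get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # bail out unless the transport is set and has a candidate variable
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"   # resolves to 192.168.100.8 for rdma in this run
}
# --- end annotation ---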
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.715 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.281 nvme0n1 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: ]] 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:57.281 12:38:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:57.281 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:57.281 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:57.281 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:57.281 12:38:03 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:57.281 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.281 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.214 nvme0n1 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: ]] 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # 
local digest dhgroup keyid ckey 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.214 12:38:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.149 nvme0n1 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: ]] 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:23:59.149 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.150 12:38:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.716 nvme0n1 00:23:59.716 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.716 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.716 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.716 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.716 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.716 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: ]] 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.975 12:38:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.541 nvme0n1 00:24:00.541 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.541 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.541 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.541 12:38:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.541 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.541 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
local ip 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.800 12:38:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.367 nvme0n1 00:24:01.367 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.367 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.367 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.367 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.367 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.367 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.625 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.625 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.625 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.625 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.625 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.625 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:01.625 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.625 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:01.625 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 
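A note on the DHHC-1 strings being echoed throughout this trace: they follow the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64(secret || CRC-32)>:, where the middle field <t> names the hash used to transform the configured secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512); the keys above deliberately span all four variants. A quick size sanity check on the key just echoed, using only plain coreutils (the trailing 4 bytes of the decoded payload are a CRC-32 over the secret):

    key='DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU:'
    b64=${key#DHHC-1:*:}   # strip the "DHHC-1:<t>:" prefix
    b64=${b64%:}           # and the trailing colon
    printf '%s' "$b64" | base64 -d | wc -c   # prints 36 = 32-byte secret + 4-byte CRC-32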
00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY0ZTRlNGQyMjhhNDVlOTY4YzhhZGEwOThkZDA1YWbSkymU: 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: ]] 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGY5MWJkZjUzOWRjNjI3YjYxYzM0MjBmZjI0YzFhNjc1MWQyYTk0MDhhYTZlODYwN2FhZmY5NzQyYTJlMDIwM4K34fQ=: 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:01.626 12:38:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.626 12:38:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.000 nvme0n1 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: ]] 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:03.000 12:38:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:03.000 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.001 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.001 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:03.001 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:03.001 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:03.001 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:03.001 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:03.001 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:03.001 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.001 12:38:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.376 nvme0n1 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
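Every keyid in this trace goes through the same fixed cycle. A condensed sketch of that cycle, reconstructed from the xtrace above (rpc_cmd and nvmet_auth_set_key are the harness helpers from host/auth.sh and common scripts; the keys/ckeys arrays hold the DHHC-1 secrets shown in the log and are elided here; only the two dhgroups exercised in this slice are listed):

    #!/usr/bin/env bash
    # Sketch of the per-key connect/authenticate loop driven by host/auth.sh.
    declare -a keys ckeys   # DHHC-1 secrets as seen in the trace (elided)

    for dhgroup in ffdhe6144 ffdhe8192; do
      for keyid in "${!keys[@]}"; do
        # Program the secret (and controller secret, if any) into the target.
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
        # Pin the host to exactly the digest/dhgroup combination under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        # Connect over RDMA with the matching key pair.
        ctrlr_key=()
        [[ -n ${ckeys[keyid]:-} ]] && ctrlr_key=(--dhchap-ctrlr-key "ckey${keyid}")
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
          -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
          -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ctrlr_key[@]}"
        # Authentication succeeded iff the controller shows up, then clean up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done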
00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: ]] 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:04.376 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:04.377 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.377 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:04.377 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.377 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.377 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.377 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.377 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:04.377 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:04.377 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:04.377 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.377 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.377 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:04.377 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP 
]] 00:24:04.377 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:04.377 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:04.377 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:04.377 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:04.377 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.377 12:38:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.752 nvme0n1 00:24:05.752 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.752 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.752 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.752 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.752 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.752 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.752 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.752 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.752 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.752 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.752 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.752 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.752 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:05.752 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.752 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTRmMmNmOGU3YmZiMTBiNDdmNjYwNDc3NTZiMWJjNTMyNmQzNzY4ODlkODVkY2JmJrFzrw==: 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: ]] 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # 
echo DHHC-1:00:MmNmZjllMDg1OTNiOGI3ZDY1MTBjN2ZlNWIxZTAxNjKS4doq: 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:05.753 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:05.754 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:05.754 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:05.754 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.754 12:38:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.129 nvme0n1 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.129 12:38:12 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWVmYWQyNWM4NWU1MTcwMTM2MDhlODI5MjhmMDgwMDYzYjNlZDFkNGQxNmViM2E2YTc1MzhlMGJjMTE4MGNkOUyTjcA=: 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:07.129 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:07.130 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:07.130 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.130 12:38:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.506 nvme0n1 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: ]] 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.506 12:38:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.507 12:38:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.507 request: 00:24:08.507 { 00:24:08.507 "name": "nvme0", 00:24:08.507 "trtype": "rdma", 00:24:08.507 "traddr": "192.168.100.8", 00:24:08.507 "adrfam": "ipv4", 00:24:08.507 "trsvcid": "4420", 00:24:08.507 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:08.507 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:08.507 "prchk_reftag": false, 00:24:08.507 "prchk_guard": false, 00:24:08.507 "hdgst": false, 00:24:08.507 "ddgst": false, 00:24:08.507 "allow_unrecognized_csi": false, 00:24:08.507 "method": "bdev_nvme_attach_controller", 00:24:08.507 "req_id": 1 00:24:08.507 } 00:24:08.507 Got JSON-RPC error response 00:24:08.507 response: 00:24:08.507 { 00:24:08.507 "code": -5, 00:24:08.507 "message": "Input/output error" 00:24:08.507 } 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:08.507 12:38:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.507 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.765 request: 00:24:08.765 { 00:24:08.765 "name": "nvme0", 00:24:08.765 "trtype": "rdma", 00:24:08.766 "traddr": "192.168.100.8", 00:24:08.766 "adrfam": "ipv4", 00:24:08.766 "trsvcid": "4420", 00:24:08.766 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:08.766 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:08.766 "prchk_reftag": false, 00:24:08.766 "prchk_guard": false, 00:24:08.766 "hdgst": false, 00:24:08.766 "ddgst": false, 00:24:08.766 "dhchap_key": "key2", 00:24:08.766 "allow_unrecognized_csi": false, 00:24:08.766 "method": "bdev_nvme_attach_controller", 00:24:08.766 "req_id": 1 00:24:08.766 } 00:24:08.766 Got JSON-RPC error response 00:24:08.766 response: 00:24:08.766 { 00:24:08.766 "code": -5, 00:24:08.766 "message": "Input/output error" 00:24:08.766 } 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:08.766 12:38:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.766 request: 00:24:08.766 { 00:24:08.766 "name": "nvme0", 00:24:08.766 "trtype": "rdma", 00:24:08.766 "traddr": "192.168.100.8", 00:24:08.766 "adrfam": "ipv4", 00:24:08.766 "trsvcid": "4420", 00:24:08.766 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:08.766 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:08.766 "prchk_reftag": false, 00:24:08.766 "prchk_guard": false, 00:24:08.766 "hdgst": false, 00:24:08.766 "ddgst": false, 00:24:08.766 "dhchap_key": "key1", 00:24:08.766 "dhchap_ctrlr_key": "ckey2", 00:24:08.766 "allow_unrecognized_csi": false, 00:24:08.766 "method": "bdev_nvme_attach_controller", 00:24:08.766 "req_id": 1 00:24:08.766 } 00:24:08.766 Got JSON-RPC error response 00:24:08.766 response: 00:24:08.766 { 00:24:08.766 "code": -5, 00:24:08.766 "message": "Input/output error" 00:24:08.766 } 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:08.766 
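The failed attaches above are the point of this phase: with the target reprogrammed for sha256/ffdhe2048 and keyid 1, connecting with no key, with key2, or with key1 but a mismatched controller key (ckey2) must all be rejected, and each rejection surfaces as JSON-RPC error -5 ("Input/output error"); the bdev_nvme_set_keys probe further down is likewise expected to fail, with -13 ("Permission denied"). A minimal sketch of the negative-path pattern (this NOT is a simplification of the real harness helper in autotest_common.sh, which also distinguishes crashes from clean failures):

    # Succeed only if the wrapped command fails (simplified NOT helper).
    NOT() { if "$@"; then return 1; else return 0; fi; }

    # Wrong key: the target must refuse DH-HMAC-CHAP authentication...
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
    # ...and no half-attached controller may be left behind.
    (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))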
12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.766 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.024 nvme0n1 00:24:09.024 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.024 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:09.024 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.024 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.024 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:09.024 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:09.024 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:24:09.024 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:24:09.024 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.024 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:09.024 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:24:09.024 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: ]] 00:24:09.025 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:24:09.025 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
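nvmet_auth_set_key above rotates the target side to keyid 2 by echoing the hash, DH group, and DHHC-1 secrets. The redirect targets fall outside this capture, so the paths below are an assumption based on the kernel nvmet configfs interface rather than something shown in the log:

    # Plausible expansion of nvmet_auth_set_key sha256 ffdhe2048 2: write the
    # DH-HMAC-CHAP parameters into the kernel target's host entry (attribute
    # names per the kernel nvmet configfs interface; inferred, not captured).
    h=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$h/dhchap_hash"
    echo 'ffdhe2048'    > "$h/dhchap_dhgroup"
    echo 'DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7:' > "$h/dhchap_key"
    echo 'DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0:' > "$h/dhchap_ctrl_key"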
host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:09.025 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.025 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.284 request: 00:24:09.284 { 00:24:09.284 "name": "nvme0", 00:24:09.284 "dhchap_key": "key1", 00:24:09.284 "dhchap_ctrlr_key": "ckey2", 00:24:09.284 "method": "bdev_nvme_set_keys", 00:24:09.284 "req_id": 1 00:24:09.284 } 00:24:09.284 Got JSON-RPC error response 00:24:09.284 response: 00:24:09.284 { 00:24:09.284 "code": -13, 00:24:09.284 "message": "Permission denied" 00:24:09.284 } 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.284 12:38:14 
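With the target now holding key2/ckey2, bdev_nvme_set_keys re-keys the live nvme0 controller: the matching pair at @133 succeeds, while the mismatched key1/ckey2 attempt at @136 is refused with -13 (Permission denied), since the target cannot verify that controller key. A sketch of the same pair of calls via scripts/rpc.py:

    # Re-key a live controller; a mismatched ctrlr key must be refused.
    scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    if scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
        echo "FAIL: mismatched controller key was accepted" >&2
        exit 1
    fi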
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:09.284 12:38:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:10.659 12:38:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:10.659 12:38:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.659 12:38:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.659 12:38:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.659 12:38:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.659 12:38:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:10.659 12:38:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:11.594 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.594 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.594 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:11.594 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.594 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.594 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:24:11.594 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:11.594 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.594 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.594 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.594 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:11.594 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:24:11.594 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:24:11.594 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.594 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.594 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU5NWU5OTA1NmMyMDZhNDBmMzEwYzg1ZDdjNTUzZTVhYmQ2YzBjMzVmMjYyMmRiu+xWyg==: 00:24:11.594 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: ]] 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTM2MTY2ZDlmMGNmMjJkZDI3OGY2YzkwMDk1MmFkNjk5YzQyOTQ1ZGZjYTNlMWJhbFNk3g==: 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
local ip 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.595 nvme0n1 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2IzYjliNDRmMTlkOWJiMWY2OWI3MjFhMmY3OTFhZjhB2kM7: 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: ]] 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTgwZTljNTY1Njk4NzZiMjBhM2U5ZjFlZmEyNjhmYjWLJCC0: 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 
--dhchap-ctrlr-key ckey1 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.595 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.853 request: 00:24:11.853 { 00:24:11.853 "name": "nvme0", 00:24:11.853 "dhchap_key": "key2", 00:24:11.853 "dhchap_ctrlr_key": "ckey1", 00:24:11.853 "method": "bdev_nvme_set_keys", 00:24:11.853 "req_id": 1 00:24:11.853 } 00:24:11.853 Got JSON-RPC error response 00:24:11.853 response: 00:24:11.853 { 00:24:11.853 "code": -13, 00:24:11.853 "message": "Permission denied" 00:24:11.853 } 00:24:11.853 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:11.853 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:11.853 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:11.853 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:11.853 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:11.853 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.853 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.853 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.853 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:11.853 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.853 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:24:11.853 12:38:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:24:12.788 12:38:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:12.788 12:38:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.788 12:38:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.788 12:38:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.788 12:38:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.788 12:38:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:24:12.788 12:38:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:24:14.163 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.163 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:14.163 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
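Because the controller was attached with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1, the harness never forces a detach after a refused re-key; it just polls bdev_nvme_get_controllers once per second until the controller ages out, which is the jq-length/sleep pattern repeating above. Condensed:

    # Wait, one second at a time, until the failed controller is torn down
    # by its loss timeout (controller count drops to zero).
    while (( $(scripts/rpc.py bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1s
    done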
00:24:14.163 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.163 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.163 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:24:14.163 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:24:14.163 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:24:14.163 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:14.163 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:14.163 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:14.164 rmmod nvme_rdma 00:24:14.164 rmmod nvme_fabrics 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2828889 ']' 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2828889 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2828889 ']' 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2828889 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2828889 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2828889' 00:24:14.164 killing process with pid 2828889 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2828889 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2828889 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir 
/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:24:14.164 12:38:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:24:15.546 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:24:15.546 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:24:15.546 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:24:15.546 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:24:15.546 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:24:15.546 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:24:15.546 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:24:15.546 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:24:15.546 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:24:15.546 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:24:15.546 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:24:15.546 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:24:15.546 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:24:15.546 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:24:15.546 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:24:15.546 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:24:16.485 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:24:16.485 12:38:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.0XI /tmp/spdk.key-null.sqX /tmp/spdk.key-sha256.ryE /tmp/spdk.key-sha384.Ge8 /tmp/spdk.key-sha512.akK /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:24:16.485 12:38:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:24:17.863 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:17.863 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:24:17.863 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:24:17.863 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:24:17.863 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:24:17.863 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:24:17.863 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:24:17.863 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:24:17.863 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:24:17.863 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:24:17.863 
0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:24:17.863 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:24:17.863 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:24:17.863 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:24:17.863 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:24:17.863 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:24:17.863 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:24:17.863 00:24:17.863 real 1m8.743s 00:24:17.863 user 1m7.915s 00:24:17.863 sys 0m6.937s 00:24:17.863 12:38:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:17.863 12:38:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.863 ************************************ 00:24:17.863 END TEST nvmf_auth_host 00:24:17.863 ************************************ 00:24:17.863 12:38:23 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:24:17.863 12:38:23 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:24:17.863 12:38:23 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:24:17.863 12:38:23 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:24:17.863 12:38:23 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:24:17.863 12:38:23 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:17.863 12:38:23 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:17.863 12:38:23 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.863 ************************************ 00:24:17.863 START TEST nvmf_bdevperf 00:24:17.863 ************************************ 00:24:17.863 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:24:17.863 * Looking for test storage... 
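For reference before nvmf_bdevperf gets going: the auth test's clean_kernel_target unwound the configfs tree it had built, in reverse order of creation. The rm/rmdir/modprobe lines are verbatim from the log above; the target of the bare 'echo 0' is not captured, so the namespace enable attribute below is inferred:

    # clean_kernel_target, roughly: disable the namespace, unlink the port
    # binding, then remove namespaces, port, subsystem, and the modules.
    s=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    echo 0 > "$s/namespaces/1/enable"    # attribute name inferred, not shown in this log
    rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir  "$s/namespaces/1"
    rmdir  /sys/kernel/config/nvmet/ports/1
    rmdir  "$s"
    modprobe -r nvmet_rdma nvmet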
00:24:17.863 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:17.863 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:17.863 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:17.863 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:18.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.123 --rc genhtml_branch_coverage=1 00:24:18.123 --rc genhtml_function_coverage=1 00:24:18.123 --rc genhtml_legend=1 00:24:18.123 --rc geninfo_all_blocks=1 00:24:18.123 --rc geninfo_unexecuted_blocks=1 00:24:18.123 00:24:18.123 ' 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:18.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.123 --rc genhtml_branch_coverage=1 00:24:18.123 --rc genhtml_function_coverage=1 00:24:18.123 --rc genhtml_legend=1 00:24:18.123 --rc geninfo_all_blocks=1 00:24:18.123 --rc geninfo_unexecuted_blocks=1 00:24:18.123 00:24:18.123 ' 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:18.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.123 --rc genhtml_branch_coverage=1 00:24:18.123 --rc genhtml_function_coverage=1 00:24:18.123 --rc genhtml_legend=1 00:24:18.123 --rc geninfo_all_blocks=1 00:24:18.123 --rc geninfo_unexecuted_blocks=1 00:24:18.123 00:24:18.123 ' 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:18.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.123 --rc genhtml_branch_coverage=1 00:24:18.123 --rc genhtml_function_coverage=1 00:24:18.123 --rc genhtml_legend=1 00:24:18.123 --rc geninfo_all_blocks=1 00:24:18.123 --rc geninfo_unexecuted_blocks=1 00:24:18.123 00:24:18.123 ' 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.123 12:38:23 
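The block above is scripts/common.sh deciding whether the installed lcov predates 2.x (lt 1.15 2 is true here), which selects the pre-2.x branch/function coverage options. A compact field-by-field equivalent of that comparison, not the verbatim helper:

    # Return 0 if dotted version $1 is strictly less than $2.
    lt() {
        local -a a b; local i
        IFS='.' read -ra a <<< "$1"
        IFS='.' read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov predates 2.x"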
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.123 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:18.124 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:18.124 12:38:23 
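Two details of the common.sh setup above are worth spelling out. The host NQN/ID pair is generated fresh per run, and the "[: : integer expression expected" line is genuine captured stderr from numerically testing an empty string at nvmf/common.sh line 33 ('[' '' -eq 1 ']'). Sketched, with the hostid extraction and the guarded variable name as illustrative assumptions:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # this run produced nqn.2014-08.org.nvmexpress:uuid:f19ece52-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:} # extraction shown for illustration; common.sh may derive it differently
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # Defaulting the expansion avoids the integer-expression complaint when
    # the flag is unset (SOME_FLAG is a placeholder, not the actual variable):
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo "flag set"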
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:18.124 12:38:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.665 12:38:25 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:20.665 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:24:20.666 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:24:20.666 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:24:20.666 Found net devices under 0000:83:00.0: mlx_0_0 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:24:20.666 Found net devices under 0000:83:00.1: mlx_0_1 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:20.666 12:38:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:24:20.666 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:20.666 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:24:20.666 altname enp131s0f0np0 00:24:20.666 inet 192.168.100.8/24 scope global mlx_0_0 00:24:20.666 valid_lft forever preferred_lft forever 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # 
interface=mlx_0_1 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:24:20.666 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:20.666 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:24:20.666 altname enp131s0f1np1 00:24:20.666 inet 192.168.100.9/24 scope global mlx_0_1 00:24:20.666 valid_lft forever preferred_lft forever 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:20.666 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in 
$(get_rdma_if_list) 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:24:20.667 192.168.100.9' 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:24:20.667 192.168.100.9' 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:24:20.667 192.168.100.9' 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # tail -n +2 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2837832 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2837832 00:24:20.667 12:38:26 
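The allocate_nic_ips/get_available_rdma_ips machinery above reduces to one short pipeline per RDMA interface, after which the first and second target addresses are peeled off the collected list (interface names and addresses are this run's):

    # get_ip_address: first IPv4 address on the interface, without the prefix length.
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.9
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)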
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2837832 ']' 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.667 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:20.667 [2024-11-20 12:38:26.203743] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:24:20.667 [2024-11-20 12:38:26.203850] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.667 [2024-11-20 12:38:26.277023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:20.667 [2024-11-20 12:38:26.341745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.667 [2024-11-20 12:38:26.341801] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.667 [2024-11-20 12:38:26.341817] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.667 [2024-11-20 12:38:26.341830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.667 [2024-11-20 12:38:26.341841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.667 [2024-11-20 12:38:26.343098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.667 [2024-11-20 12:38:26.343149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.667 [2024-11-20 12:38:26.343153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.926 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.926 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:24:20.926 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:20.926 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:20.926 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:20.926 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.926 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:20.926 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.926 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:20.926 [2024-11-20 12:38:26.551835] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x93d590/0x941a80) succeed. 
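The trace above has discovered the RDMA interfaces (mlx_0_0 and mlx_0_1 at 192.168.100.8 and 192.168.100.9), started nvmf_tgt on cores 1-3, and created the RDMA transport; the lines that follow add the malloc bdev, the subsystem, its namespace, and the listener. For reference, a minimal standalone sketch of the same bring-up, assuming an SPDK checkout with rpc.py talking to the default /var/tmp/spdk.sock (the RPC names and flags below are taken verbatim from the trace; the paths are assumptions):

# Sketch of the target bring-up that host/bdevperf.sh's tgt_init drives above.
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1     # -> 192.168.100.8
./build/bin/nvmf_tgt -m 0xE &
sleep 2    # the real script polls the RPC socket via waitforlisten instead
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420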
00:24:20.926 [2024-11-20 12:38:26.566604] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x93eb80/0x983120) succeed. 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:21.184 Malloc0 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:21.184 [2024-11-20 12:38:26.747431] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:21.184 { 00:24:21.184 "params": { 00:24:21.184 "name": "Nvme$subsystem", 00:24:21.184 "trtype": "$TEST_TRANSPORT", 00:24:21.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.184 "adrfam": "ipv4", 00:24:21.184 "trsvcid": "$NVMF_PORT", 00:24:21.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.184 "hdgst": ${hdgst:-false}, 00:24:21.184 "ddgst": ${ddgst:-false} 00:24:21.184 }, 00:24:21.184 "method": "bdev_nvme_attach_controller" 00:24:21.184 } 00:24:21.184 
EOF 00:24:21.184 )") 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:24:21.184 12:38:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:21.184 "params": { 00:24:21.184 "name": "Nvme1", 00:24:21.184 "trtype": "rdma", 00:24:21.184 "traddr": "192.168.100.8", 00:24:21.184 "adrfam": "ipv4", 00:24:21.184 "trsvcid": "4420", 00:24:21.184 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.185 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:21.185 "hdgst": false, 00:24:21.185 "ddgst": false 00:24:21.185 }, 00:24:21.185 "method": "bdev_nvme_attach_controller" 00:24:21.185 }' 00:24:21.185 [2024-11-20 12:38:26.804269] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:24:21.185 [2024-11-20 12:38:26.804363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2837948 ] 00:24:21.185 [2024-11-20 12:38:26.877047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.185 [2024-11-20 12:38:26.941701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.444 Running I/O for 1 seconds... 00:24:22.820 11263.00 IOPS, 44.00 MiB/s 00:24:22.820 Latency(us) 00:24:22.820 [2024-11-20T11:38:28.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.820 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:22.820 Verification LBA range: start 0x0 length 0x4000 00:24:22.820 Nvme1n1 : 1.02 11216.56 43.81 0.00 0.00 11331.28 3713.71 20777.34 00:24:22.820 [2024-11-20T11:38:28.586Z] =================================================================================================================== 00:24:22.820 [2024-11-20T11:38:28.586Z] Total : 11216.56 43.81 0.00 0.00 11331.28 3713.71 20777.34 00:24:22.820 12:38:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2838050 00:24:22.820 12:38:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:22.820 12:38:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:22.820 12:38:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:22.820 12:38:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:24:22.820 12:38:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:24:22.820 12:38:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:22.820 12:38:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:22.820 { 00:24:22.820 "params": { 00:24:22.820 "name": "Nvme$subsystem", 00:24:22.820 "trtype": "$TEST_TRANSPORT", 00:24:22.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.820 "adrfam": "ipv4", 00:24:22.820 "trsvcid": "$NVMF_PORT", 00:24:22.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.820 "hdgst": ${hdgst:-false}, 00:24:22.820 "ddgst": ${ddgst:-false} 00:24:22.820 }, 00:24:22.820 "method": 
"bdev_nvme_attach_controller" 00:24:22.820 } 00:24:22.820 EOF 00:24:22.820 )") 00:24:22.820 12:38:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:24:22.820 12:38:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:24:22.820 12:38:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:24:22.820 12:38:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:22.820 "params": { 00:24:22.820 "name": "Nvme1", 00:24:22.820 "trtype": "rdma", 00:24:22.820 "traddr": "192.168.100.8", 00:24:22.820 "adrfam": "ipv4", 00:24:22.820 "trsvcid": "4420", 00:24:22.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.820 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:22.820 "hdgst": false, 00:24:22.820 "ddgst": false 00:24:22.820 }, 00:24:22.820 "method": "bdev_nvme_attach_controller" 00:24:22.820 }' 00:24:22.820 [2024-11-20 12:38:28.422901] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:24:22.820 [2024-11-20 12:38:28.423004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838050 ] 00:24:22.820 [2024-11-20 12:38:28.496259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.820 [2024-11-20 12:38:28.560523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.078 Running I/O for 15 seconds... 00:24:25.388 11461.00 IOPS, 44.77 MiB/s [2024-11-20T11:38:31.412Z] 11570.00 IOPS, 45.20 MiB/s [2024-11-20T11:38:31.412Z] 12:38:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2837832 00:24:25.646 12:38:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:26.779 10061.00 IOPS, 39.30 MiB/s [2024-11-20T11:38:32.545Z] [2024-11-20 12:38:32.406394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x183900 00:24:26.779 [2024-11-20 12:38:32.406445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efc9a000 sqhd:8250 p:0 m:0 dnr:0 00:24:26.779 [2024-11-20 12:38:32.406477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432e000 len:0x1000 key:0x183900 00:24:26.779 [2024-11-20 12:38:32.406502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efc9a000 sqhd:8250 p:0 m:0 dnr:0 00:24:26.779 [2024-11-20 12:38:32.406521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x183900 00:24:26.779 [2024-11-20 12:38:32.406538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efc9a000 sqhd:8250 p:0 m:0 dnr:0 00:24:26.779 [2024-11-20 12:38:32.406555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432a000 len:0x1000 key:0x183900 00:24:26.779 [2024-11-20 12:38:32.406571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efc9a000 sqhd:8250 p:0 m:0 dnr:0 00:24:26.779 [2024-11-20 12:38:32.406589] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x183900 00:24:26.779 [2024-11-20 12:38:32.406606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efc9a000 sqhd:8250 p:0 m:0 dnr:0
[... log condensed: the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for the remaining in-flight I/O on qid:1 (READs through lba:110584, then WRITEs from lba:110592 through lba:111352), each command completed with ABORTED - SQ DELETION (00/08) after the target process was killed ...]
00:24:26.782 [2024-11-20 12:38:32.410663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:111360 len:8 SGL DATA
BLOCK OFFSET 0x0 len:0x1000 00:24:26.782 [2024-11-20 12:38:32.410678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efc9a000 sqhd:8250 p:0 m:0 dnr:0 00:24:26.782 [2024-11-20 12:38:32.410695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.782 [2024-11-20 12:38:32.410710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efc9a000 sqhd:8250 p:0 m:0 dnr:0 00:24:26.782 [2024-11-20 12:38:32.410728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.782 [2024-11-20 12:38:32.410743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efc9a000 sqhd:8250 p:0 m:0 dnr:0 00:24:26.782 [2024-11-20 12:38:32.410760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.782 [2024-11-20 12:38:32.410782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efc9a000 sqhd:8250 p:0 m:0 dnr:0 00:24:26.782 [2024-11-20 12:38:32.410800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.782 [2024-11-20 12:38:32.410815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efc9a000 sqhd:8250 p:0 m:0 dnr:0 00:24:26.782 [2024-11-20 12:38:32.410832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.783 [2024-11-20 12:38:32.410848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efc9a000 sqhd:8250 p:0 m:0 dnr:0 00:24:26.783 [2024-11-20 12:38:32.413129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.783 [2024-11-20 12:38:32.413161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.783 [2024-11-20 12:38:32.413177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111408 len:8 PRP1 0x0 PRP2 0x0 00:24:26.783 [2024-11-20 12:38:32.413193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.783 [2024-11-20 12:38:32.417258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:26.783 [2024-11-20 12:38:32.443070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:24:26.783 [2024-11-20 12:38:32.446885] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:26.783 [2024-11-20 12:38:32.446915] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:26.783 [2024-11-20 12:38:32.446930] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed040 00:24:27.976 7545.75 IOPS, 29.48 MiB/s [2024-11-20T11:38:33.742Z] [2024-11-20 12:38:33.451361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:24:27.976 [2024-11-20 12:38:33.451421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:27.976 [2024-11-20 12:38:33.451695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:27.976 [2024-11-20 12:38:33.451718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:27.976 [2024-11-20 12:38:33.451734] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:24:27.976 [2024-11-20 12:38:33.451754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:27.976 [2024-11-20 12:38:33.456542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:27.976 [2024-11-20 12:38:33.460497] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:27.976 [2024-11-20 12:38:33.460528] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:27.976 [2024-11-20 12:38:33.460544] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed040 00:24:28.801 6036.60 IOPS, 23.58 MiB/s [2024-11-20T11:38:34.567Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2837832 Killed "${NVMF_APP[@]}" "$@" 00:24:28.801 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:28.801 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:28.801 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:28.801 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:28.801 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:28.801 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2838541 00:24:28.801 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:28.801 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2838541 00:24:28.801 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2838541 ']' 00:24:28.801 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.801 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.801 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.801 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.801 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:28.801 [2024-11-20 12:38:34.427341] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
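The stretch above is the hand-off inside bdevperf.sh: the old target (pid 2837832) is killed on line 35, tgt_init/nvmfappstart relaunches nvmf_tgt with the same flags, and waitforlisten blocks until the new process answers on /var/tmp/spdk.sock. A minimal sketch of that relaunch pattern, assuming the in-tree binaries and the default RPC socket (the rpc.py poll is only an illustrative stand-in for waitforlisten, and the 60 s timeout is invented):

    # relaunch the target with the flags visible in the trace above
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # wait until the RPC server answers on /var/tmp/spdk.sock
    ./scripts/rpc.py -t 60 rpc_get_methods > /dev/null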
00:24:28.801 [2024-11-20 12:38:34.427445] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.801 [2024-11-20 12:38:34.468541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:24:28.801 [2024-11-20 12:38:34.468592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:28.801 [2024-11-20 12:38:34.468873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:28.801 [2024-11-20 12:38:34.468896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:28.801 [2024-11-20 12:38:34.468913] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:24:28.801 [2024-11-20 12:38:34.468933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:28.801 [2024-11-20 12:38:34.473254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:28.801 [2024-11-20 12:38:34.476745] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:28.801 [2024-11-20 12:38:34.476776] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:28.801 [2024-11-20 12:38:34.476792] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed040 00:24:28.801 [2024-11-20 12:38:34.505099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:29.059 [2024-11-20 12:38:34.567459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.059 [2024-11-20 12:38:34.567521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.059 [2024-11-20 12:38:34.567545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.059 [2024-11-20 12:38:34.567559] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.059 [2024-11-20 12:38:34.567570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
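The app_setup_trace notices above spell out how to inspect the tracepoints enabled by the 0xFFFF mask; followed by hand they come down to roughly this (in-tree spdk_trace binary assumed):

    # live snapshot from the running app (app name nvmf, shm instance id 0)
    ./build/bin/spdk_trace -s nvmf -i 0
    # or, as the last notice says, keep the shm copy for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0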
00:24:29.059 [2024-11-20 12:38:34.568846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.059 [2024-11-20 12:38:34.568898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:29.059 [2024-11-20 12:38:34.568903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.059 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.059 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:24:29.059 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:29.059 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:29.059 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:29.059 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.059 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:29.059 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.059 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:29.059 [2024-11-20 12:38:34.775308] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d27590/0x1d2ba80) succeed. 00:24:29.059 5030.50 IOPS, 19.65 MiB/s [2024-11-20T11:38:34.825Z] [2024-11-20 12:38:34.790255] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d28b80/0x1d6d120) succeed. 00:24:29.321 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.321 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:29.321 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.321 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:29.321 Malloc0 00:24:29.321 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.321 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:29.321 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.321 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:29.321 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.321 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:29.321 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.321 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:29.321 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.321 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:29.321 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.321 12:38:34 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:29.321 [2024-11-20 12:38:34.974468] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:29.321 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.321 12:38:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2838050 00:24:29.942 [2024-11-20 12:38:35.481204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:24:29.942 [2024-11-20 12:38:35.481244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:29.942 [2024-11-20 12:38:35.481515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:29.942 [2024-11-20 12:38:35.481538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:29.942 [2024-11-20 12:38:35.481554] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:24:29.942 [2024-11-20 12:38:35.481573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:29.942 [2024-11-20 12:38:35.486091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:29.942 [2024-11-20 12:38:35.538544] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:24:31.147 4701.29 IOPS, 18.36 MiB/s [2024-11-20T11:38:37.877Z] 5565.88 IOPS, 21.74 MiB/s [2024-11-20T11:38:38.811Z] 6242.56 IOPS, 24.38 MiB/s [2024-11-20T11:38:40.199Z] 6783.10 IOPS, 26.50 MiB/s [2024-11-20T11:38:41.138Z] 7225.36 IOPS, 28.22 MiB/s [2024-11-20T11:38:42.079Z] 7589.92 IOPS, 29.65 MiB/s [2024-11-20T11:38:43.020Z] 7899.62 IOPS, 30.86 MiB/s [2024-11-20T11:38:43.963Z] 8166.21 IOPS, 31.90 MiB/s [2024-11-20T11:38:43.963Z] 8397.93 IOPS, 32.80 MiB/s
00:24:38.197 Latency(us)
00:24:38.197 [2024-11-20T11:38:43.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:38.197 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:38.197 Verification LBA range: start 0x0 length 0x4000
00:24:38.197 Nvme1n1 : 15.01 8395.96 32.80 6701.22 0.00 8447.09 716.04 1050129.45
00:24:38.197 [2024-11-20T11:38:43.963Z] ===================================================================================================================
00:24:38.197 [2024-11-20T11:38:43.963Z] Total : 8395.96 32.80 6701.22 0.00 8447.09 716.04 1050129.45
00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
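The rpc_cmd calls traced above are the entire target bring-up for this test: one RDMA transport, a 64 MB malloc bdev, one subsystem carrying that bdev as a namespace, and a listener on 192.168.100.8:4420. Issued by hand, the same sequence would look roughly like this (arguments copied from the trace, default RPC socket assumed):

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The summary table then reads as expected for a disconnect test: Nvme1n1 still averaged 8395.96 IOPS over the 15.01 s run, the repeated forced resets account for the 6701.22 Fail/s column, and the 1050129.45 us max latency is I/O held across a reconnect.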
00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:38.458 rmmod nvme_rdma 00:24:38.458 rmmod nvme_fabrics 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2838541 ']' 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2838541 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2838541 ']' 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2838541 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2838541 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2838541' 00:24:38.458 killing process with pid 2838541 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2838541 00:24:38.458 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2838541 00:24:38.717 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:38.717 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:24:38.717 00:24:38.717 real 0m20.916s 00:24:38.717 user 1m2.175s 00:24:38.717 sys 0m2.929s 00:24:38.717 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:38.717 12:38:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:38.717 ************************************ 00:24:38.717 END TEST nvmf_bdevperf 00:24:38.717 ************************************ 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:38.979 ************************************ 00:24:38.979 START TEST nvmf_target_disconnect 00:24:38.979 ************************************ 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:24:38.979 * Looking for test storage... 00:24:38.979 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:38.979 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:38.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.980 --rc genhtml_branch_coverage=1 00:24:38.980 --rc genhtml_function_coverage=1 00:24:38.980 --rc genhtml_legend=1 00:24:38.980 --rc geninfo_all_blocks=1 00:24:38.980 --rc geninfo_unexecuted_blocks=1 00:24:38.980 00:24:38.980 ' 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:38.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.980 --rc genhtml_branch_coverage=1 00:24:38.980 --rc genhtml_function_coverage=1 00:24:38.980 --rc genhtml_legend=1 00:24:38.980 --rc geninfo_all_blocks=1 00:24:38.980 --rc geninfo_unexecuted_blocks=1 00:24:38.980 00:24:38.980 ' 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:38.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.980 --rc genhtml_branch_coverage=1 00:24:38.980 --rc genhtml_function_coverage=1 00:24:38.980 --rc genhtml_legend=1 00:24:38.980 --rc geninfo_all_blocks=1 00:24:38.980 --rc geninfo_unexecuted_blocks=1 00:24:38.980 00:24:38.980 ' 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:38.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.980 --rc genhtml_branch_coverage=1 00:24:38.980 --rc genhtml_function_coverage=1 00:24:38.980 --rc genhtml_legend=1 00:24:38.980 --rc geninfo_all_blocks=1 00:24:38.980 --rc geninfo_unexecuted_blocks=1 00:24:38.980 00:24:38.980 ' 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:38.980 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:24:38.980 12:38:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:24:41.522 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:24:41.522 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:24:41.522 12:38:46 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:24:41.522 Found net devices under 0000:83:00.0: mlx_0_0 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:24:41.522 Found net devices under 0000:83:00.1: mlx_0_1 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 
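The rdma_device_init path entered above (nvmf/common.sh@529) starts by loading the IB/RDMA kernel module stack before any IPs are assigned; the set it probes, as the trace below shows, is equivalent to:

    # kernel modules loaded by load_ib_rdma_modules, per the trace below
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done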
00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:41.522 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:41.523 12:38:46 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:24:41.523 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:41.523 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:24:41.523 altname enp131s0f0np0 00:24:41.523 inet 192.168.100.8/24 scope global mlx_0_0 00:24:41.523 valid_lft forever preferred_lft forever 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:24:41.523 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:41.523 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:24:41.523 altname enp131s0f1np1 00:24:41.523 inet 192.168.100.9/24 scope global mlx_0_1 00:24:41.523 valid_lft forever preferred_lft forever 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:24:41.523 192.168.100.9' 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:24:41.523 192.168.100.9' 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:41.523 12:38:46 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:24:41.523 192.168.100.9' 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:24:41.523 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:24:41.524 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:41.524 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:41.524 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.524 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:41.524 ************************************ 00:24:41.524 START TEST nvmf_target_disconnect_tc1 00:24:41.524 ************************************ 00:24:41.524 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:24:41.524 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:41.524 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:24:41.524 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:41.524 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:41.524 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.524 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:41.524 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.524 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:41.524 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.524 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:41.524 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:24:41.524 12:38:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:41.524 [2024-11-20 12:38:47.107004] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:41.524 [2024-11-20 12:38:47.107089] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:41.524 [2024-11-20 12:38:47.107106] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:24:42.466 [2024-11-20 12:38:48.111542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:24:42.466 [2024-11-20 12:38:48.111593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 
00:24:42.466 [2024-11-20 12:38:48.111613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:24:42.466 [2024-11-20 12:38:48.111653] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:42.466 [2024-11-20 12:38:48.111672] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:24:42.466 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:24:42.466 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:42.466 Initializing NVMe Controllers 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:42.466 00:24:42.466 real 0m1.143s 00:24:42.466 user 0m0.956s 00:24:42.466 sys 0m0.166s 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:42.466 ************************************ 00:24:42.466 END TEST nvmf_target_disconnect_tc1 00:24:42.466 ************************************ 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:42.466 ************************************ 00:24:42.466 START TEST nvmf_target_disconnect_tc2 00:24:42.466 ************************************ 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2840915 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2840915 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2840915 ']' 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.466 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:42.725 [2024-11-20 12:38:48.231021] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:24:42.725 [2024-11-20 12:38:48.231127] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.725 [2024-11-20 12:38:48.304343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:42.725 [2024-11-20 12:38:48.367671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.725 [2024-11-20 12:38:48.367729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.725 [2024-11-20 12:38:48.367745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.725 [2024-11-20 12:38:48.367759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.725 [2024-11-20 12:38:48.367771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
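Here tc2's disconnect_init brings up a fresh target: nvmf_tgt is launched with instance/shm id 0 (-i), tracepoint group mask 0xFFFF (-e) and core mask 0xF0 (-m, i.e. cores 4-7, matching the reactor lines below), and waitforlisten blocks until pid 2840915 answers on /var/tmp/spdk.sock. A hedged sketch of that start-and-wait step; the real logic lives in the nvmfappstart/waitforlisten helpers from the test common scripts, and the polling loop here is illustrative only:

  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # poll the RPC socket until the app is ready to serve requests
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done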
00:24:42.725 [2024-11-20 12:38:48.369112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:42.725 [2024-11-20 12:38:48.369191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:42.725 [2024-11-20 12:38:48.369247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:24:42.725 [2024-11-20 12:38:48.369255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:42.985 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:42.985 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:42.985 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:42.985 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:42.985 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:42.985 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.985 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:42.985 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.985 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:42.985 Malloc0 00:24:42.985 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.985 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:42.985 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.985 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:42.985 [2024-11-20 12:38:48.642862] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2332150/0x233dcf0) succeed. 00:24:42.985 [2024-11-20 12:38:48.659045] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23337e0/0x23bdd80) succeed. 
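With the app up, the script provisions the target over RPC: a 64 MiB malloc bdev with 512-byte blocks, an RDMA transport with 1024 shared buffers (the create_ib_device lines above are its result), and, in the trace that follows, a subsystem exposing the bdev plus RDMA listeners on 192.168.100.8:4420. Condensed from the traced rpc_cmd calls (rpc_cmd is taken here to be the harness wrapper around scripts/rpc.py):

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420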
00:24:43.244 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.244 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:43.244 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.244 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:43.244 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.245 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:43.245 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.245 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:43.245 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.245 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:43.245 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.245 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:43.245 [2024-11-20 12:38:48.845580] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:43.245 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.245 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:43.245 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.245 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:43.245 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.245 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2840949 00:24:43.245 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:24:43.245 12:38:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:45.175 12:38:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 
2840915 00:24:45.175 12:38:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:24:46.556 Write completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Write completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Read completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Write completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Write completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Write completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Write completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Write completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Write completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Read completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Write completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Write completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Read completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Write completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Read completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Read completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Read completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Read completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Write completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Read completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Read completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Write completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Write completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Read completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Write completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Read completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Write completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Read completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Read completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Read completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Read completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 Read completed with error (sct=0, sc=8) 00:24:46.556 starting I/O failed 00:24:46.556 [2024-11-20 12:38:52.052240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:47.127 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2840915 Killed "${NVMF_APP[@]}" "$@" 00:24:47.127 12:38:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:24:47.127 12:38:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # 
nvmfappstart -m 0xF0 00:24:47.127 12:38:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:47.127 12:38:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:47.127 12:38:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:47.127 12:38:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2841327 00:24:47.127 12:38:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:47.127 12:38:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2841327 00:24:47.127 12:38:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2841327 ']' 00:24:47.127 12:38:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.127 12:38:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.127 12:38:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.127 12:38:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.127 12:38:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:47.386 [2024-11-20 12:38:52.920397] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:24:47.386 [2024-11-20 12:38:52.920495] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.386 [2024-11-20 12:38:52.997671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:47.386 Read completed with error (sct=0, sc=8) 00:24:47.386 starting I/O failed 00:24:47.386 Write completed with error (sct=0, sc=8) 00:24:47.386 starting I/O failed 00:24:47.386 Read completed with error (sct=0, sc=8) 00:24:47.386 starting I/O failed 00:24:47.386 Read completed with error (sct=0, sc=8) 00:24:47.386 starting I/O failed 00:24:47.386 Write completed with error (sct=0, sc=8) 00:24:47.386 starting I/O failed 00:24:47.386 Read completed with error (sct=0, sc=8) 00:24:47.386 starting I/O failed 00:24:47.386 Read completed with error (sct=0, sc=8) 00:24:47.386 starting I/O failed 00:24:47.386 Write completed with error (sct=0, sc=8) 00:24:47.386 starting I/O failed 00:24:47.386 Read completed with error (sct=0, sc=8) 00:24:47.386 starting I/O failed 00:24:47.386 Write completed with error (sct=0, sc=8) 00:24:47.386 starting I/O failed 00:24:47.386 Write completed with error (sct=0, sc=8) 00:24:47.386 starting I/O failed 00:24:47.386 Write completed with error (sct=0, sc=8) 00:24:47.386 starting I/O failed 00:24:47.386 Read completed with error (sct=0, sc=8) 00:24:47.386 starting I/O failed 00:24:47.386 Write completed with error (sct=0, sc=8) 00:24:47.386 starting I/O failed 00:24:47.386 Write completed with error (sct=0, sc=8) 00:24:47.386 starting I/O failed 00:24:47.386 Read completed with error (sct=0, sc=8) 00:24:47.386 starting I/O failed 00:24:47.387 Write completed with error (sct=0, sc=8) 00:24:47.387 starting I/O failed 00:24:47.387 Write completed with error (sct=0, sc=8) 00:24:47.387 starting I/O failed 00:24:47.387 Read completed with error (sct=0, sc=8) 00:24:47.387 starting I/O failed 00:24:47.387 Read completed with error (sct=0, sc=8) 00:24:47.387 starting I/O failed 00:24:47.387 Read completed with error (sct=0, sc=8) 00:24:47.387 starting I/O failed 00:24:47.387 Read completed with error (sct=0, sc=8) 00:24:47.387 starting I/O failed 00:24:47.387 Read completed with error (sct=0, sc=8) 00:24:47.387 starting I/O failed 00:24:47.387 Read completed with error (sct=0, sc=8) 00:24:47.387 starting I/O failed 00:24:47.387 Write completed with error (sct=0, sc=8) 00:24:47.387 starting I/O failed 00:24:47.387 Write completed with error (sct=0, sc=8) 00:24:47.387 starting I/O failed 00:24:47.387 Write completed with error (sct=0, sc=8) 00:24:47.387 starting I/O failed 00:24:47.387 Write completed with error (sct=0, sc=8) 00:24:47.387 starting I/O failed 00:24:47.387 Write completed with error (sct=0, sc=8) 00:24:47.387 starting I/O failed 00:24:47.387 Write completed with error (sct=0, sc=8) 00:24:47.387 starting I/O failed 00:24:47.387 Write completed with error (sct=0, sc=8) 00:24:47.387 starting I/O failed 00:24:47.387 Write completed with error (sct=0, sc=8) 00:24:47.387 starting I/O failed 00:24:47.387 [2024-11-20 12:38:53.058016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:47.387 [2024-11-20 12:38:53.060310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:47.387 [2024-11-20 12:38:53.060347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.387 [2024-11-20 12:38:53.060362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.387 [2024-11-20 12:38:53.060376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.387 [2024-11-20 12:38:53.060387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:47.387 [2024-11-20 12:38:53.061698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:47.387 [2024-11-20 12:38:53.061820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:47.387 [2024-11-20 12:38:53.061938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:24:47.387 [2024-11-20 12:38:53.061961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:47.646 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.646 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:47.646 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:47.646 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:47.646 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:47.646 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.646 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:47.646 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.646 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:47.646 Malloc0 00:24:47.646 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.646 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:47.646 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.646 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:47.646 [2024-11-20 12:38:53.314741] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e34150/0x1e3fcf0) succeed. 00:24:47.646 [2024-11-20 12:38:53.330427] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e357e0/0x1ebfd80) succeed. 
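This second bring-up (nvmfpid 2841327, new mlx5 device handles) is the restart half of the tc2 scenario: the first target was killed out from under the running reconnect example, and a replacement is now being stood up so the example can try to recover. Condensed from the host/target_disconnect.sh trace lines above (@37 through @50); $rootdir, $nvmfpid and the exact literals are paraphrase assumptions, not a copy of the script:

  disconnect_init 192.168.100.8        # @37: start nvmf_tgt + subsystem (first instance)
  "$rootdir/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &   # @40: host I/O in background
  reconnectpid=$!                      # @42
  sleep 2                              # @44: let I/O get going
  kill -9 $nvmfpid                     # @45: hard-kill the target mid-I/O
  sleep 2                              # @47
  disconnect_init 192.168.100.8        # @48: stand up a fresh target (this instance)
  wait $reconnectpid                   # @50: the example must ride out the bounce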
00:24:47.906 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.906 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:47.906 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.906 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:47.906 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.906 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:47.906 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.906 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:47.906 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.906 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:47.906 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.906 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:47.906 [2024-11-20 12:38:53.518669] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:47.906 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.906 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:47.906 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.906 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:47.906 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.906 12:38:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2840949 00:24:48.477 Read completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Write completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Write completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Write completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Write completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Write completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Read completed with error (sct=0, sc=8) 00:24:48.477 
starting I/O failed 00:24:48.477 Read completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Write completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Read completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Read completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Read completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Read completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Read completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Write completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Write completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Read completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Read completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Write completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Read completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Read completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Read completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Read completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Write completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Read completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Read completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Read completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Read completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Read completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Write completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Write completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 Read completed with error (sct=0, sc=8) 00:24:48.477 starting I/O failed 00:24:48.477 [2024-11-20 12:38:54.063974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:49.416 Write completed with error (sct=0, sc=8) 00:24:49.416 starting I/O failed 00:24:49.416 Write completed with error (sct=0, sc=8) 00:24:49.416 starting I/O failed 00:24:49.416 Read completed with error (sct=0, sc=8) 00:24:49.416 starting I/O failed 00:24:49.416 Read completed with error (sct=0, sc=8) 00:24:49.416 starting I/O failed 00:24:49.416 Write completed with error (sct=0, sc=8) 00:24:49.416 starting I/O failed 00:24:49.416 Write completed with error (sct=0, sc=8) 00:24:49.416 starting I/O failed 00:24:49.416 Write completed with error (sct=0, sc=8) 00:24:49.416 starting I/O failed 00:24:49.416 Write completed with error (sct=0, sc=8) 00:24:49.416 starting I/O failed 00:24:49.416 Read completed with error (sct=0, sc=8) 00:24:49.416 starting I/O failed 00:24:49.416 Write completed with error (sct=0, sc=8) 00:24:49.416 starting I/O failed 00:24:49.416 Write completed with error (sct=0, sc=8) 00:24:49.416 starting I/O failed 00:24:49.416 Read completed with error (sct=0, sc=8) 00:24:49.416 starting I/O failed 00:24:49.416 Write completed with error (sct=0, sc=8) 00:24:49.416 starting I/O failed 00:24:49.416 Write completed with error (sct=0, 
sc=8) 00:24:49.416 starting I/O failed 00:24:49.416 Write completed with error (sct=0, sc=8) 00:24:49.416 starting I/O failed 00:24:49.416 Write completed with error (sct=0, sc=8) 00:24:49.417 starting I/O failed 00:24:49.417 Write completed with error (sct=0, sc=8) 00:24:49.417 starting I/O failed 00:24:49.417 Write completed with error (sct=0, sc=8) 00:24:49.417 starting I/O failed 00:24:49.417 Write completed with error (sct=0, sc=8) 00:24:49.417 starting I/O failed 00:24:49.417 Read completed with error (sct=0, sc=8) 00:24:49.417 starting I/O failed 00:24:49.417 Write completed with error (sct=0, sc=8) 00:24:49.417 starting I/O failed 00:24:49.417 Read completed with error (sct=0, sc=8) 00:24:49.417 starting I/O failed 00:24:49.417 Read completed with error (sct=0, sc=8) 00:24:49.417 starting I/O failed 00:24:49.417 Write completed with error (sct=0, sc=8) 00:24:49.417 starting I/O failed 00:24:49.417 Read completed with error (sct=0, sc=8) 00:24:49.417 starting I/O failed 00:24:49.417 Read completed with error (sct=0, sc=8) 00:24:49.417 starting I/O failed 00:24:49.417 Write completed with error (sct=0, sc=8) 00:24:49.417 starting I/O failed 00:24:49.417 Read completed with error (sct=0, sc=8) 00:24:49.417 starting I/O failed 00:24:49.417 Write completed with error (sct=0, sc=8) 00:24:49.417 starting I/O failed 00:24:49.417 Write completed with error (sct=0, sc=8) 00:24:49.417 starting I/O failed 00:24:49.417 Write completed with error (sct=0, sc=8) 00:24:49.417 starting I/O failed 00:24:49.417 Read completed with error (sct=0, sc=8) 00:24:49.417 starting I/O failed 00:24:49.417 [2024-11-20 12:38:55.069358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.417 [2024-11-20 12:38:55.081738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.417 [2024-11-20 12:38:55.081819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.417 [2024-11-20 12:38:55.081852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.417 [2024-11-20 12:38:55.081868] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.417 [2024-11-20 12:38:55.081883] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.417 [2024-11-20 12:38:55.091510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.417 qpair failed and we were unable to recover it. 
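The failure signature that starts here repeats for the rest of this stretch: the restarted target has no record of controller ID 0x1, so _nvmf_ctrlr_add_io_qpair rejects each Fabrics CONNECT for the example's I/O qpairs; on the host side the CONNECT completes with sct 1, sc 130 (0x82, which for the Fabrics command set reads as a connect-invalid-parameters rejection; an interpretation on my part, the log only prints the raw codes), the qpair is torn down, and the example retries. An illustrative one-liner for tallying the retry iterations from a saved copy of this console output (build.log is a hypothetical file name):

  grep -c 'qpair failed and we were unable to recover it' build.log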
00:24:49.417 [2024-11-20 12:38:55.101627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.417 [2024-11-20 12:38:55.101691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.417 [2024-11-20 12:38:55.101723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.417 [2024-11-20 12:38:55.101740] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.417 [2024-11-20 12:38:55.101755] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.417 [2024-11-20 12:38:55.111575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.417 qpair failed and we were unable to recover it. 00:24:49.417 [2024-11-20 12:38:55.121682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.417 [2024-11-20 12:38:55.121745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.417 [2024-11-20 12:38:55.121775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.417 [2024-11-20 12:38:55.121791] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.417 [2024-11-20 12:38:55.121805] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.417 [2024-11-20 12:38:55.131653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.417 qpair failed and we were unable to recover it. 00:24:49.417 [2024-11-20 12:38:55.141572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.417 [2024-11-20 12:38:55.141640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.417 [2024-11-20 12:38:55.141670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.417 [2024-11-20 12:38:55.141687] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.417 [2024-11-20 12:38:55.141701] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.417 [2024-11-20 12:38:55.151601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.417 qpair failed and we were unable to recover it. 
00:24:49.417 [2024-11-20 12:38:55.161720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.417 [2024-11-20 12:38:55.161789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.417 [2024-11-20 12:38:55.161819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.417 [2024-11-20 12:38:55.161835] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.417 [2024-11-20 12:38:55.161850] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.417 [2024-11-20 12:38:55.171504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.417 qpair failed and we were unable to recover it. 00:24:49.678 [2024-11-20 12:38:55.181853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.678 [2024-11-20 12:38:55.181915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.678 [2024-11-20 12:38:55.181946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.678 [2024-11-20 12:38:55.181962] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.678 [2024-11-20 12:38:55.181976] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.678 [2024-11-20 12:38:55.191942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.678 qpair failed and we were unable to recover it. 00:24:49.678 [2024-11-20 12:38:55.201937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.678 [2024-11-20 12:38:55.202000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.678 [2024-11-20 12:38:55.202035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.678 [2024-11-20 12:38:55.202051] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.678 [2024-11-20 12:38:55.202064] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.678 [2024-11-20 12:38:55.211889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.678 qpair failed and we were unable to recover it. 
00:24:49.678 [2024-11-20 12:38:55.221953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.678 [2024-11-20 12:38:55.222024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.678 [2024-11-20 12:38:55.222052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.678 [2024-11-20 12:38:55.222068] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.678 [2024-11-20 12:38:55.222082] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.678 [2024-11-20 12:38:55.232080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.678 qpair failed and we were unable to recover it. 00:24:49.678 [2024-11-20 12:38:55.242430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.678 [2024-11-20 12:38:55.242499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.678 [2024-11-20 12:38:55.242531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.678 [2024-11-20 12:38:55.242548] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.678 [2024-11-20 12:38:55.242562] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.678 [2024-11-20 12:38:55.252128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.678 qpair failed and we were unable to recover it. 00:24:49.678 [2024-11-20 12:38:55.262370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.678 [2024-11-20 12:38:55.262439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.679 [2024-11-20 12:38:55.262470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.679 [2024-11-20 12:38:55.262494] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.679 [2024-11-20 12:38:55.262509] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.679 [2024-11-20 12:38:55.272269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.679 qpair failed and we were unable to recover it. 
00:24:49.679 [2024-11-20 12:38:55.282548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.679 [2024-11-20 12:38:55.282612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.679 [2024-11-20 12:38:55.282643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.679 [2024-11-20 12:38:55.282665] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.679 [2024-11-20 12:38:55.282679] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.679 [2024-11-20 12:38:55.292284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.679 qpair failed and we were unable to recover it. 00:24:49.679 [2024-11-20 12:38:55.302609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.679 [2024-11-20 12:38:55.302676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.679 [2024-11-20 12:38:55.302703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.679 [2024-11-20 12:38:55.302718] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.679 [2024-11-20 12:38:55.302731] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.679 [2024-11-20 12:38:55.312410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.679 qpair failed and we were unable to recover it. 00:24:49.679 [2024-11-20 12:38:55.322696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.679 [2024-11-20 12:38:55.322761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.679 [2024-11-20 12:38:55.322788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.679 [2024-11-20 12:38:55.322804] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.679 [2024-11-20 12:38:55.322818] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.679 [2024-11-20 12:38:55.332181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.679 qpair failed and we were unable to recover it. 
00:24:49.679 [2024-11-20 12:38:55.342443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.679 [2024-11-20 12:38:55.342511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.679 [2024-11-20 12:38:55.342541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.679 [2024-11-20 12:38:55.342558] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.679 [2024-11-20 12:38:55.342571] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.679 [2024-11-20 12:38:55.352330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.679 qpair failed and we were unable to recover it. 00:24:49.679 [2024-11-20 12:38:55.362565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.679 [2024-11-20 12:38:55.362629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.679 [2024-11-20 12:38:55.362657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.679 [2024-11-20 12:38:55.362672] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.679 [2024-11-20 12:38:55.362686] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.679 [2024-11-20 12:38:55.372494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.679 qpair failed and we were unable to recover it. 00:24:49.679 [2024-11-20 12:38:55.382412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.679 [2024-11-20 12:38:55.382490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.679 [2024-11-20 12:38:55.382521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.679 [2024-11-20 12:38:55.382537] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.679 [2024-11-20 12:38:55.382551] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.679 [2024-11-20 12:38:55.392590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.679 qpair failed and we were unable to recover it. 
00:24:49.679 [2024-11-20 12:38:55.402612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.679 [2024-11-20 12:38:55.402683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.679 [2024-11-20 12:38:55.402711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.679 [2024-11-20 12:38:55.402726] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.679 [2024-11-20 12:38:55.402740] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.679 [2024-11-20 12:38:55.412614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.679 qpair failed and we were unable to recover it. 00:24:49.679 [2024-11-20 12:38:55.422741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.679 [2024-11-20 12:38:55.422807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.679 [2024-11-20 12:38:55.422837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.679 [2024-11-20 12:38:55.422853] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.679 [2024-11-20 12:38:55.422867] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.679 [2024-11-20 12:38:55.432680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.679 qpair failed and we were unable to recover it. 00:24:49.940 [2024-11-20 12:38:55.442691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.940 [2024-11-20 12:38:55.442755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.941 [2024-11-20 12:38:55.442783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.941 [2024-11-20 12:38:55.442799] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.941 [2024-11-20 12:38:55.442813] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.941 [2024-11-20 12:38:55.452639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.941 qpair failed and we were unable to recover it. 
00:24:49.941 [2024-11-20 12:38:55.462772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.941 [2024-11-20 12:38:55.462845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.941 [2024-11-20 12:38:55.462876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.941 [2024-11-20 12:38:55.462891] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.941 [2024-11-20 12:38:55.462905] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.941 [2024-11-20 12:38:55.472857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.941 qpair failed and we were unable to recover it. 00:24:49.941 [2024-11-20 12:38:55.482827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.941 [2024-11-20 12:38:55.482901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.941 [2024-11-20 12:38:55.482929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.941 [2024-11-20 12:38:55.482945] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.941 [2024-11-20 12:38:55.482958] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.941 [2024-11-20 12:38:55.492734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.941 qpair failed and we were unable to recover it. 00:24:49.941 [2024-11-20 12:38:55.502947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.941 [2024-11-20 12:38:55.503012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.941 [2024-11-20 12:38:55.503039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.941 [2024-11-20 12:38:55.503054] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.941 [2024-11-20 12:38:55.503068] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.941 [2024-11-20 12:38:55.512804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.941 qpair failed and we were unable to recover it. 
00:24:49.941 [2024-11-20 12:38:55.523067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.941 [2024-11-20 12:38:55.523132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.941 [2024-11-20 12:38:55.523160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.941 [2024-11-20 12:38:55.523176] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.941 [2024-11-20 12:38:55.523190] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.941 [2024-11-20 12:38:55.532938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.941 qpair failed and we were unable to recover it. 00:24:49.941 [2024-11-20 12:38:55.543034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.941 [2024-11-20 12:38:55.543106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.941 [2024-11-20 12:38:55.543139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.941 [2024-11-20 12:38:55.543155] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.941 [2024-11-20 12:38:55.543168] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.941 [2024-11-20 12:38:55.553014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.941 qpair failed and we were unable to recover it. 00:24:49.941 [2024-11-20 12:38:55.563178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.941 [2024-11-20 12:38:55.563250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.941 [2024-11-20 12:38:55.563281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.941 [2024-11-20 12:38:55.563297] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.941 [2024-11-20 12:38:55.563310] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.941 [2024-11-20 12:38:55.573237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.941 qpair failed and we were unable to recover it. 
00:24:49.941 [2024-11-20 12:38:55.583219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.941 [2024-11-20 12:38:55.583283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.941 [2024-11-20 12:38:55.583314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.941 [2024-11-20 12:38:55.583330] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.941 [2024-11-20 12:38:55.583343] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.941 [2024-11-20 12:38:55.593086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.941 qpair failed and we were unable to recover it. 00:24:49.941 [2024-11-20 12:38:55.603449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.941 [2024-11-20 12:38:55.603516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.941 [2024-11-20 12:38:55.603549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.941 [2024-11-20 12:38:55.603565] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.941 [2024-11-20 12:38:55.603578] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.941 [2024-11-20 12:38:55.613126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.941 qpair failed and we were unable to recover it. 00:24:49.941 [2024-11-20 12:38:55.623349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.941 [2024-11-20 12:38:55.623420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.941 [2024-11-20 12:38:55.623450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.941 [2024-11-20 12:38:55.623466] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.941 [2024-11-20 12:38:55.623495] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.941 [2024-11-20 12:38:55.633275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.941 qpair failed and we were unable to recover it. 
00:24:49.941 [2024-11-20 12:38:55.643453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.941 [2024-11-20 12:38:55.643529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.941 [2024-11-20 12:38:55.643559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.941 [2024-11-20 12:38:55.643575] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.941 [2024-11-20 12:38:55.643589] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.941 [2024-11-20 12:38:55.653300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.941 qpair failed and we were unable to recover it. 00:24:49.941 [2024-11-20 12:38:55.663400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.941 [2024-11-20 12:38:55.663464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.941 [2024-11-20 12:38:55.663503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.941 [2024-11-20 12:38:55.663521] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.942 [2024-11-20 12:38:55.663534] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.942 [2024-11-20 12:38:55.673731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.942 qpair failed and we were unable to recover it. 00:24:49.942 [2024-11-20 12:38:55.683509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:49.942 [2024-11-20 12:38:55.683571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:49.942 [2024-11-20 12:38:55.683599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:49.942 [2024-11-20 12:38:55.683614] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:49.942 [2024-11-20 12:38:55.683628] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.942 [2024-11-20 12:38:55.693458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.942 qpair failed and we were unable to recover it. 
00:24:50.202 [2024-11-20 12:38:55.703301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.202 [2024-11-20 12:38:55.703369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.202 [2024-11-20 12:38:55.703396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.202 [2024-11-20 12:38:55.703411] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.202 [2024-11-20 12:38:55.703424] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.202 [2024-11-20 12:38:55.713450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.202 qpair failed and we were unable to recover it. 00:24:50.202 [2024-11-20 12:38:55.723526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.202 [2024-11-20 12:38:55.723593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.202 [2024-11-20 12:38:55.723619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.202 [2024-11-20 12:38:55.723635] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.202 [2024-11-20 12:38:55.723649] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.202 [2024-11-20 12:38:55.733436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.202 qpair failed and we were unable to recover it. 00:24:50.202 [2024-11-20 12:38:55.743616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.202 [2024-11-20 12:38:55.743680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.202 [2024-11-20 12:38:55.743708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.202 [2024-11-20 12:38:55.743723] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.202 [2024-11-20 12:38:55.743737] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.202 [2024-11-20 12:38:55.754078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.202 qpair failed and we were unable to recover it. 
00:24:50.202 [2024-11-20 12:38:55.763583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.202 [2024-11-20 12:38:55.763643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.202 [2024-11-20 12:38:55.763671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.202 [2024-11-20 12:38:55.763687] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.202 [2024-11-20 12:38:55.763701] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.202 [2024-11-20 12:38:55.773496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.202 qpair failed and we were unable to recover it. 00:24:50.202 [2024-11-20 12:38:55.783363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.202 [2024-11-20 12:38:55.783437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.202 [2024-11-20 12:38:55.783463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.203 [2024-11-20 12:38:55.783489] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.203 [2024-11-20 12:38:55.783505] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.203 [2024-11-20 12:38:55.793305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.203 qpair failed and we were unable to recover it. 00:24:50.203 [2024-11-20 12:38:55.803455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.203 [2024-11-20 12:38:55.803544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.203 [2024-11-20 12:38:55.803571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.203 [2024-11-20 12:38:55.803587] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.203 [2024-11-20 12:38:55.803601] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.203 [2024-11-20 12:38:55.813492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.203 qpair failed and we were unable to recover it. 
00:24:50.203 [2024-11-20 12:38:55.823518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.203 [2024-11-20 12:38:55.823586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.203 [2024-11-20 12:38:55.823613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.203 [2024-11-20 12:38:55.823628] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.203 [2024-11-20 12:38:55.823641] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.203 [2024-11-20 12:38:55.833630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.203 qpair failed and we were unable to recover it. 00:24:50.203 [2024-11-20 12:38:55.843686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.203 [2024-11-20 12:38:55.843750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.203 [2024-11-20 12:38:55.843781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.203 [2024-11-20 12:38:55.843797] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.203 [2024-11-20 12:38:55.843811] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.203 [2024-11-20 12:38:55.853510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.203 qpair failed and we were unable to recover it. 00:24:50.203 [2024-11-20 12:38:55.863784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.203 [2024-11-20 12:38:55.863854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.203 [2024-11-20 12:38:55.863884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.203 [2024-11-20 12:38:55.863901] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.203 [2024-11-20 12:38:55.863914] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.203 [2024-11-20 12:38:55.873937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.203 qpair failed and we were unable to recover it. 
00:24:50.203 [2024-11-20 12:38:55.883756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.203 [2024-11-20 12:38:55.883825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.203 [2024-11-20 12:38:55.883859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.203 [2024-11-20 12:38:55.883875] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.203 [2024-11-20 12:38:55.883889] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.203 [2024-11-20 12:38:55.894033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.203 qpair failed and we were unable to recover it. 00:24:50.203 [2024-11-20 12:38:55.903954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.203 [2024-11-20 12:38:55.904020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.203 [2024-11-20 12:38:55.904048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.203 [2024-11-20 12:38:55.904064] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.203 [2024-11-20 12:38:55.904078] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.203 [2024-11-20 12:38:55.913972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.203 qpair failed and we were unable to recover it. 00:24:50.203 [2024-11-20 12:38:55.924148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.203 [2024-11-20 12:38:55.924215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.203 [2024-11-20 12:38:55.924242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.203 [2024-11-20 12:38:55.924258] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.203 [2024-11-20 12:38:55.924272] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.203 [2024-11-20 12:38:55.934085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.203 qpair failed and we were unable to recover it. 
00:24:50.203 [2024-11-20 12:38:55.944197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.203 [2024-11-20 12:38:55.944270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.203 [2024-11-20 12:38:55.944303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.203 [2024-11-20 12:38:55.944319] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.203 [2024-11-20 12:38:55.944333] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.203 [2024-11-20 12:38:55.954214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.203 qpair failed and we were unable to recover it. 00:24:50.203 [2024-11-20 12:38:55.964339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.203 [2024-11-20 12:38:55.964404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.203 [2024-11-20 12:38:55.964437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.203 [2024-11-20 12:38:55.964453] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.203 [2024-11-20 12:38:55.964473] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.464 [2024-11-20 12:38:55.974128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.464 qpair failed and we were unable to recover it. 00:24:50.464 [2024-11-20 12:38:55.984364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.464 [2024-11-20 12:38:55.984425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.464 [2024-11-20 12:38:55.984455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.464 [2024-11-20 12:38:55.984471] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.464 [2024-11-20 12:38:55.984496] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.464 [2024-11-20 12:38:55.994198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.464 qpair failed and we were unable to recover it. 
00:24:50.464 [2024-11-20 12:38:56.004515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.464 [2024-11-20 12:38:56.004579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.464 [2024-11-20 12:38:56.004607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.464 [2024-11-20 12:38:56.004622] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.464 [2024-11-20 12:38:56.004636] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.464 [2024-11-20 12:38:56.014457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.464 qpair failed and we were unable to recover it. 00:24:50.464 [2024-11-20 12:38:56.024457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.464 [2024-11-20 12:38:56.024538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.464 [2024-11-20 12:38:56.024565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.464 [2024-11-20 12:38:56.024581] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.464 [2024-11-20 12:38:56.024595] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.464 [2024-11-20 12:38:56.034213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.464 qpair failed and we were unable to recover it. 00:24:50.464 [2024-11-20 12:38:56.044530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.464 [2024-11-20 12:38:56.044604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.464 [2024-11-20 12:38:56.044634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.464 [2024-11-20 12:38:56.044650] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.464 [2024-11-20 12:38:56.044664] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.464 [2024-11-20 12:38:56.054343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.464 qpair failed and we were unable to recover it. 
00:24:50.464 [2024-11-20 12:38:56.064665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.464 [2024-11-20 12:38:56.064735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.464 [2024-11-20 12:38:56.064763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.464 [2024-11-20 12:38:56.064778] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.464 [2024-11-20 12:38:56.064791] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.464 [2024-11-20 12:38:56.074590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.464 qpair failed and we were unable to recover it. 00:24:50.464 [2024-11-20 12:38:56.084531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.464 [2024-11-20 12:38:56.084595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.464 [2024-11-20 12:38:56.084625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.464 [2024-11-20 12:38:56.084641] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.464 [2024-11-20 12:38:56.084654] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.464 [2024-11-20 12:38:56.094702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.464 qpair failed and we were unable to recover it. 00:24:50.464 [2024-11-20 12:38:56.104803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.464 [2024-11-20 12:38:56.104872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.464 [2024-11-20 12:38:56.104900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.464 [2024-11-20 12:38:56.104915] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.464 [2024-11-20 12:38:56.104929] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.464 [2024-11-20 12:38:56.114642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.464 qpair failed and we were unable to recover it. 
00:24:50.464 [2024-11-20 12:38:56.124974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.464 [2024-11-20 12:38:56.125045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.464 [2024-11-20 12:38:56.125073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.464 [2024-11-20 12:38:56.125089] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.465 [2024-11-20 12:38:56.125103] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.465 [2024-11-20 12:38:56.134705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.465 qpair failed and we were unable to recover it. 00:24:50.465 [2024-11-20 12:38:56.144968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.465 [2024-11-20 12:38:56.145042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.465 [2024-11-20 12:38:56.145072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.465 [2024-11-20 12:38:56.145088] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.465 [2024-11-20 12:38:56.145101] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.465 [2024-11-20 12:38:56.154866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.465 qpair failed and we were unable to recover it. 00:24:50.465 [2024-11-20 12:38:56.164967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.465 [2024-11-20 12:38:56.165031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.465 [2024-11-20 12:38:56.165060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.465 [2024-11-20 12:38:56.165076] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.465 [2024-11-20 12:38:56.165089] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.465 [2024-11-20 12:38:56.174974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.465 qpair failed and we were unable to recover it. 
00:24:50.465 [2024-11-20 12:38:56.184961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.465 [2024-11-20 12:38:56.185031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.465 [2024-11-20 12:38:56.185059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.465 [2024-11-20 12:38:56.185075] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.465 [2024-11-20 12:38:56.185088] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.465 [2024-11-20 12:38:56.195028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.465 qpair failed and we were unable to recover it. 00:24:50.465 [2024-11-20 12:38:56.205111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.465 [2024-11-20 12:38:56.205180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.465 [2024-11-20 12:38:56.205210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.465 [2024-11-20 12:38:56.205226] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.465 [2024-11-20 12:38:56.205239] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.465 [2024-11-20 12:38:56.214766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.465 qpair failed and we were unable to recover it. 00:24:50.465 [2024-11-20 12:38:56.225124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.465 [2024-11-20 12:38:56.225192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.465 [2024-11-20 12:38:56.225228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.465 [2024-11-20 12:38:56.225246] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.465 [2024-11-20 12:38:56.225259] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.725 [2024-11-20 12:38:56.234869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.725 qpair failed and we were unable to recover it. 
00:24:50.725 [2024-11-20 12:38:56.245132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.725 [2024-11-20 12:38:56.245192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.725 [2024-11-20 12:38:56.245220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.725 [2024-11-20 12:38:56.245236] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.725 [2024-11-20 12:38:56.245249] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.725 [2024-11-20 12:38:56.254983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.725 qpair failed and we were unable to recover it. 00:24:50.725 [2024-11-20 12:38:56.265030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.725 [2024-11-20 12:38:56.265102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.725 [2024-11-20 12:38:56.265131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.725 [2024-11-20 12:38:56.265146] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.725 [2024-11-20 12:38:56.265160] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.725 [2024-11-20 12:38:56.275070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.725 qpair failed and we were unable to recover it. 00:24:50.725 [2024-11-20 12:38:56.285078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.725 [2024-11-20 12:38:56.285142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.725 [2024-11-20 12:38:56.285172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.725 [2024-11-20 12:38:56.285188] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.725 [2024-11-20 12:38:56.285201] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.725 [2024-11-20 12:38:56.295125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.725 qpair failed and we were unable to recover it. 
00:24:50.725 [2024-11-20 12:38:56.305291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.725 [2024-11-20 12:38:56.305357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.725 [2024-11-20 12:38:56.305387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.725 [2024-11-20 12:38:56.305403] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.725 [2024-11-20 12:38:56.305423] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.725 [2024-11-20 12:38:56.315118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.725 qpair failed and we were unable to recover it. 00:24:50.725 [2024-11-20 12:38:56.325119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.725 [2024-11-20 12:38:56.325186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.725 [2024-11-20 12:38:56.325216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.725 [2024-11-20 12:38:56.325232] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.725 [2024-11-20 12:38:56.325246] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.725 [2024-11-20 12:38:56.335208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.725 qpair failed and we were unable to recover it. 00:24:50.725 [2024-11-20 12:38:56.345088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.725 [2024-11-20 12:38:56.345155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.725 [2024-11-20 12:38:56.345188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.725 [2024-11-20 12:38:56.345205] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.725 [2024-11-20 12:38:56.345218] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.725 [2024-11-20 12:38:56.355256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.725 qpair failed and we were unable to recover it. 
00:24:50.725 [2024-11-20 12:38:56.365360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.725 [2024-11-20 12:38:56.365424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.725 [2024-11-20 12:38:56.365453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.725 [2024-11-20 12:38:56.365468] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.725 [2024-11-20 12:38:56.365490] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.725 [2024-11-20 12:38:56.375121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.725 qpair failed and we were unable to recover it. 00:24:50.725 [2024-11-20 12:38:56.385689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.725 [2024-11-20 12:38:56.385752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.725 [2024-11-20 12:38:56.385783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.725 [2024-11-20 12:38:56.385799] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.725 [2024-11-20 12:38:56.385813] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.725 [2024-11-20 12:38:56.395693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.725 qpair failed and we were unable to recover it. 00:24:50.725 [2024-11-20 12:38:56.405261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.725 [2024-11-20 12:38:56.405326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.725 [2024-11-20 12:38:56.405356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.725 [2024-11-20 12:38:56.405372] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.725 [2024-11-20 12:38:56.405385] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.725 [2024-11-20 12:38:56.415459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.725 qpair failed and we were unable to recover it. 
00:24:50.725 [2024-11-20 12:38:56.425531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.725 [2024-11-20 12:38:56.425599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.725 [2024-11-20 12:38:56.425627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.725 [2024-11-20 12:38:56.425642] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.725 [2024-11-20 12:38:56.425656] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.725 [2024-11-20 12:38:56.435365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.725 qpair failed and we were unable to recover it. 00:24:50.725 [2024-11-20 12:38:56.445573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.725 [2024-11-20 12:38:56.445645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.725 [2024-11-20 12:38:56.445673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.725 [2024-11-20 12:38:56.445688] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.725 [2024-11-20 12:38:56.445701] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.725 [2024-11-20 12:38:56.455580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.725 qpair failed and we were unable to recover it. 00:24:50.725 [2024-11-20 12:38:56.465651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.725 [2024-11-20 12:38:56.465716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.725 [2024-11-20 12:38:56.465747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.726 [2024-11-20 12:38:56.465763] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.726 [2024-11-20 12:38:56.465777] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.726 [2024-11-20 12:38:56.475793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.726 qpair failed and we were unable to recover it. 
00:24:50.726 [2024-11-20 12:38:56.485714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.726 [2024-11-20 12:38:56.485784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.726 [2024-11-20 12:38:56.485812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.726 [2024-11-20 12:38:56.485827] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.726 [2024-11-20 12:38:56.485841] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.986 [2024-11-20 12:38:56.495622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.986 qpair failed and we were unable to recover it. 00:24:50.986 [2024-11-20 12:38:56.505767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.986 [2024-11-20 12:38:56.505837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.986 [2024-11-20 12:38:56.505867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.986 [2024-11-20 12:38:56.505883] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.986 [2024-11-20 12:38:56.505897] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.986 [2024-11-20 12:38:56.515800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.986 qpair failed and we were unable to recover it. 00:24:50.986 [2024-11-20 12:38:56.525910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.986 [2024-11-20 12:38:56.525981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.986 [2024-11-20 12:38:56.526009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.986 [2024-11-20 12:38:56.526025] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.986 [2024-11-20 12:38:56.526038] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.986 [2024-11-20 12:38:56.535754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.986 qpair failed and we were unable to recover it. 
00:24:50.986 [2024-11-20 12:38:56.545876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.986 [2024-11-20 12:38:56.545942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.986 [2024-11-20 12:38:56.545970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.986 [2024-11-20 12:38:56.545986] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.986 [2024-11-20 12:38:56.545999] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.986 [2024-11-20 12:38:56.555900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.986 qpair failed and we were unable to recover it. 00:24:50.986 [2024-11-20 12:38:56.566027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.986 [2024-11-20 12:38:56.566090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.986 [2024-11-20 12:38:56.566118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.986 [2024-11-20 12:38:56.566140] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.986 [2024-11-20 12:38:56.566154] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.986 [2024-11-20 12:38:56.576013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.986 qpair failed and we were unable to recover it. 00:24:50.986 [2024-11-20 12:38:56.586037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.986 [2024-11-20 12:38:56.586110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.986 [2024-11-20 12:38:56.586138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.986 [2024-11-20 12:38:56.586154] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.986 [2024-11-20 12:38:56.586170] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.986 [2024-11-20 12:38:56.596212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.986 qpair failed and we were unable to recover it. 
00:24:50.986 [2024-11-20 12:38:56.606352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.986 [2024-11-20 12:38:56.606417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.986 [2024-11-20 12:38:56.606447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.986 [2024-11-20 12:38:56.606463] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.986 [2024-11-20 12:38:56.606476] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.986 [2024-11-20 12:38:56.616073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.986 qpair failed and we were unable to recover it. 00:24:50.986 [2024-11-20 12:38:56.626375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.986 [2024-11-20 12:38:56.626438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.986 [2024-11-20 12:38:56.626466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.986 [2024-11-20 12:38:56.626491] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.986 [2024-11-20 12:38:56.626506] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.986 [2024-11-20 12:38:56.636155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.986 qpair failed and we were unable to recover it. 00:24:50.986 [2024-11-20 12:38:56.646444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.986 [2024-11-20 12:38:56.646514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.986 [2024-11-20 12:38:56.646545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.986 [2024-11-20 12:38:56.646561] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.986 [2024-11-20 12:38:56.646575] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.986 [2024-11-20 12:38:56.656245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.986 qpair failed and we were unable to recover it. 
00:24:50.986 [2024-11-20 12:38:56.666359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.986 [2024-11-20 12:38:56.666430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.986 [2024-11-20 12:38:56.666458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.986 [2024-11-20 12:38:56.666474] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.986 [2024-11-20 12:38:56.666497] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.986 [2024-11-20 12:38:56.676410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.986 qpair failed and we were unable to recover it. 00:24:50.986 [2024-11-20 12:38:56.686432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.986 [2024-11-20 12:38:56.686514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.986 [2024-11-20 12:38:56.686541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.986 [2024-11-20 12:38:56.686556] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.986 [2024-11-20 12:38:56.686571] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.986 [2024-11-20 12:38:56.696369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.986 qpair failed and we were unable to recover it. 00:24:50.986 [2024-11-20 12:38:56.706612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.986 [2024-11-20 12:38:56.706673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.986 [2024-11-20 12:38:56.706700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.986 [2024-11-20 12:38:56.706716] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.986 [2024-11-20 12:38:56.706729] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.986 [2024-11-20 12:38:56.716423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.986 qpair failed and we were unable to recover it. 
00:24:50.986 [2024-11-20 12:38:56.726689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.986 [2024-11-20 12:38:56.726754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.986 [2024-11-20 12:38:56.726785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.986 [2024-11-20 12:38:56.726801] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.986 [2024-11-20 12:38:56.726815] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.986 [2024-11-20 12:38:56.736537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.986 qpair failed and we were unable to recover it. 00:24:50.986 [2024-11-20 12:38:56.746631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:50.987 [2024-11-20 12:38:56.746701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:50.987 [2024-11-20 12:38:56.746731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:50.987 [2024-11-20 12:38:56.746747] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:50.987 [2024-11-20 12:38:56.746760] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:51.246 [2024-11-20 12:38:56.756629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:51.246 qpair failed and we were unable to recover it. 00:24:51.246 [2024-11-20 12:38:56.766884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:51.246 [2024-11-20 12:38:56.766957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:51.246 [2024-11-20 12:38:56.766987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:51.246 [2024-11-20 12:38:56.767003] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:51.246 [2024-11-20 12:38:56.767016] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:51.246 [2024-11-20 12:38:56.776604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:51.246 qpair failed and we were unable to recover it. 
00:24:51.246 [2024-11-20 12:38:56.786725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.246 [2024-11-20 12:38:56.786789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.246 [2024-11-20 12:38:56.786820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.246 [2024-11-20 12:38:56.786835] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.246 [2024-11-20 12:38:56.786849] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.246 [2024-11-20 12:38:56.796733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.246 qpair failed and we were unable to recover it.
00:24:51.246 [2024-11-20 12:38:56.806927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.246 [2024-11-20 12:38:56.806992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.246 [2024-11-20 12:38:56.807020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.246 [2024-11-20 12:38:56.807036] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.246 [2024-11-20 12:38:56.807050] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.246 [2024-11-20 12:38:56.816717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.246 qpair failed and we were unable to recover it.
00:24:51.246 [2024-11-20 12:38:56.826850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.246 [2024-11-20 12:38:56.826922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.246 [2024-11-20 12:38:56.826956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.246 [2024-11-20 12:38:56.826973] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.246 [2024-11-20 12:38:56.826986] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.246 [2024-11-20 12:38:56.836611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.246 qpair failed and we were unable to recover it.
00:24:51.246 [2024-11-20 12:38:56.847060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.246 [2024-11-20 12:38:56.847135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.246 [2024-11-20 12:38:56.847162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.246 [2024-11-20 12:38:56.847177] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.246 [2024-11-20 12:38:56.847191] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.246 [2024-11-20 12:38:56.856757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.246 qpair failed and we were unable to recover it.
00:24:51.246 [2024-11-20 12:38:56.867127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.246 [2024-11-20 12:38:56.867191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.246 [2024-11-20 12:38:56.867219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.246 [2024-11-20 12:38:56.867234] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.246 [2024-11-20 12:38:56.867247] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.247 [2024-11-20 12:38:56.876884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.247 qpair failed and we were unable to recover it.
00:24:51.247 [2024-11-20 12:38:56.887135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.247 [2024-11-20 12:38:56.887194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.247 [2024-11-20 12:38:56.887222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.247 [2024-11-20 12:38:56.887237] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.247 [2024-11-20 12:38:56.887251] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.247 [2024-11-20 12:38:56.896798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.247 qpair failed and we were unable to recover it.
00:24:51.247 [2024-11-20 12:38:56.906689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.247 [2024-11-20 12:38:56.906758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.247 [2024-11-20 12:38:56.906787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.247 [2024-11-20 12:38:56.906809] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.247 [2024-11-20 12:38:56.906823] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.247 [2024-11-20 12:38:56.916959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.247 qpair failed and we were unable to recover it.
00:24:51.247 [2024-11-20 12:38:56.927137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.247 [2024-11-20 12:38:56.927204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.247 [2024-11-20 12:38:56.927233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.247 [2024-11-20 12:38:56.927248] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.247 [2024-11-20 12:38:56.927261] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.247 [2024-11-20 12:38:56.936834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.247 qpair failed and we were unable to recover it.
00:24:51.247 [2024-11-20 12:38:56.947101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.247 [2024-11-20 12:38:56.947168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.247 [2024-11-20 12:38:56.947196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.247 [2024-11-20 12:38:56.947211] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.247 [2024-11-20 12:38:56.947225] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.247 [2024-11-20 12:38:56.957071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.247 qpair failed and we were unable to recover it.
00:24:51.247 [2024-11-20 12:38:56.967139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.247 [2024-11-20 12:38:56.967203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.247 [2024-11-20 12:38:56.967233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.247 [2024-11-20 12:38:56.967248] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.247 [2024-11-20 12:38:56.967262] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.247 [2024-11-20 12:38:56.977036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.247 qpair failed and we were unable to recover it.
00:24:51.247 [2024-11-20 12:38:56.987186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.247 [2024-11-20 12:38:56.987260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.247 [2024-11-20 12:38:56.987290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.247 [2024-11-20 12:38:56.987306] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.247 [2024-11-20 12:38:56.987320] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.247 [2024-11-20 12:38:56.997232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.247 qpair failed and we were unable to recover it.
00:24:51.247 [2024-11-20 12:38:57.007435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.247 [2024-11-20 12:38:57.007506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.247 [2024-11-20 12:38:57.007535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.247 [2024-11-20 12:38:57.007551] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.247 [2024-11-20 12:38:57.007564] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.506 [2024-11-20 12:38:57.017220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.506 qpair failed and we were unable to recover it.
00:24:51.506 [2024-11-20 12:38:57.027378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.506 [2024-11-20 12:38:57.027441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.506 [2024-11-20 12:38:57.027474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.506 [2024-11-20 12:38:57.027502] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.506 [2024-11-20 12:38:57.027516] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.506 [2024-11-20 12:38:57.037754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.506 qpair failed and we were unable to recover it.
00:24:51.506 [2024-11-20 12:38:57.047621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.506 [2024-11-20 12:38:57.047683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.506 [2024-11-20 12:38:57.047710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.506 [2024-11-20 12:38:57.047726] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.506 [2024-11-20 12:38:57.047739] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.506 [2024-11-20 12:38:57.057411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.506 qpair failed and we were unable to recover it.
00:24:51.506 [2024-11-20 12:38:57.067780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.506 [2024-11-20 12:38:57.067850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.506 [2024-11-20 12:38:57.067876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.506 [2024-11-20 12:38:57.067892] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.506 [2024-11-20 12:38:57.067906] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.506 [2024-11-20 12:38:57.077648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.506 qpair failed and we were unable to recover it.
00:24:51.506 [2024-11-20 12:38:57.087376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.507 [2024-11-20 12:38:57.087440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.507 [2024-11-20 12:38:57.087471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.507 [2024-11-20 12:38:57.087498] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.507 [2024-11-20 12:38:57.087514] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.507 [2024-11-20 12:38:57.097614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.507 qpair failed and we were unable to recover it.
00:24:51.507 [2024-11-20 12:38:57.107871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.507 [2024-11-20 12:38:57.107941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.507 [2024-11-20 12:38:57.107972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.507 [2024-11-20 12:38:57.107988] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.507 [2024-11-20 12:38:57.108001] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.507 [2024-11-20 12:38:57.117887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.507 qpair failed and we were unable to recover it.
00:24:51.507 [2024-11-20 12:38:57.127820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.507 [2024-11-20 12:38:57.127884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.507 [2024-11-20 12:38:57.127912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.507 [2024-11-20 12:38:57.127928] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.507 [2024-11-20 12:38:57.127941] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.507 [2024-11-20 12:38:57.137798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.507 qpair failed and we were unable to recover it.
00:24:51.507 [2024-11-20 12:38:57.147706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.507 [2024-11-20 12:38:57.147778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.507 [2024-11-20 12:38:57.147806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.507 [2024-11-20 12:38:57.147822] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.507 [2024-11-20 12:38:57.147835] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.507 [2024-11-20 12:38:57.157784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.507 qpair failed and we were unable to recover it.
00:24:51.507 [2024-11-20 12:38:57.167980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.507 [2024-11-20 12:38:57.168049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.507 [2024-11-20 12:38:57.168084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.507 [2024-11-20 12:38:57.168100] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.507 [2024-11-20 12:38:57.168114] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.507 [2024-11-20 12:38:57.177999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.507 qpair failed and we were unable to recover it.
00:24:51.507 [2024-11-20 12:38:57.187974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.507 [2024-11-20 12:38:57.188038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.507 [2024-11-20 12:38:57.188067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.507 [2024-11-20 12:38:57.188083] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.507 [2024-11-20 12:38:57.188096] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.507 [2024-11-20 12:38:57.198023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.507 qpair failed and we were unable to recover it.
00:24:51.507 [2024-11-20 12:38:57.208054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.507 [2024-11-20 12:38:57.208116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.507 [2024-11-20 12:38:57.208146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.507 [2024-11-20 12:38:57.208162] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.507 [2024-11-20 12:38:57.208175] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.507 [2024-11-20 12:38:57.217782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.507 qpair failed and we were unable to recover it.
00:24:51.507 [2024-11-20 12:38:57.228269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.507 [2024-11-20 12:38:57.228341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.507 [2024-11-20 12:38:57.228369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.507 [2024-11-20 12:38:57.228384] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.507 [2024-11-20 12:38:57.228397] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.507 [2024-11-20 12:38:57.238194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.507 qpair failed and we were unable to recover it.
00:24:51.507 [2024-11-20 12:38:57.248408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.507 [2024-11-20 12:38:57.248473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.507 [2024-11-20 12:38:57.248522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.507 [2024-11-20 12:38:57.248544] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.507 [2024-11-20 12:38:57.248559] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.507 [2024-11-20 12:38:57.258119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.507 qpair failed and we were unable to recover it.
00:24:51.766 [2024-11-20 12:38:57.269623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.766 [2024-11-20 12:38:57.269687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.766 [2024-11-20 12:38:57.269717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.766 [2024-11-20 12:38:57.269733] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.766 [2024-11-20 12:38:57.269746] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.766 [2024-11-20 12:38:57.278276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.766 qpair failed and we were unable to recover it.
00:24:51.766 [2024-11-20 12:38:57.288464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.766 [2024-11-20 12:38:57.288533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.766 [2024-11-20 12:38:57.288563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.766 [2024-11-20 12:38:57.288579] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.766 [2024-11-20 12:38:57.288593] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.766 [2024-11-20 12:38:57.298125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.766 qpair failed and we were unable to recover it.
00:24:51.766 [2024-11-20 12:38:57.308393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.766 [2024-11-20 12:38:57.308462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.766 [2024-11-20 12:38:57.308502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.766 [2024-11-20 12:38:57.308519] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.766 [2024-11-20 12:38:57.308533] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.766 [2024-11-20 12:38:57.318187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.766 qpair failed and we were unable to recover it.
00:24:51.766 [2024-11-20 12:38:57.328602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.767 [2024-11-20 12:38:57.328667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.767 [2024-11-20 12:38:57.328694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.767 [2024-11-20 12:38:57.328709] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.767 [2024-11-20 12:38:57.328722] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.767 [2024-11-20 12:38:57.338256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.767 qpair failed and we were unable to recover it.
00:24:51.767 [2024-11-20 12:38:57.348638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.767 [2024-11-20 12:38:57.348701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.767 [2024-11-20 12:38:57.348728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.767 [2024-11-20 12:38:57.348744] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.767 [2024-11-20 12:38:57.348757] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.767 [2024-11-20 12:38:57.358496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.767 qpair failed and we were unable to recover it.
00:24:51.767 [2024-11-20 12:38:57.368710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.767 [2024-11-20 12:38:57.368774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.767 [2024-11-20 12:38:57.368804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.767 [2024-11-20 12:38:57.368819] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.767 [2024-11-20 12:38:57.368833] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.767 [2024-11-20 12:38:57.378508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.767 qpair failed and we were unable to recover it.
00:24:51.767 [2024-11-20 12:38:57.388658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.767 [2024-11-20 12:38:57.388730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.767 [2024-11-20 12:38:57.388761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.767 [2024-11-20 12:38:57.388777] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.767 [2024-11-20 12:38:57.388791] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.767 [2024-11-20 12:38:57.398649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.767 qpair failed and we were unable to recover it.
00:24:51.767 [2024-11-20 12:38:57.408727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.767 [2024-11-20 12:38:57.408792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.767 [2024-11-20 12:38:57.408820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.767 [2024-11-20 12:38:57.408835] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.767 [2024-11-20 12:38:57.408849] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.767 [2024-11-20 12:38:57.418635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.767 qpair failed and we were unable to recover it.
00:24:51.767 [2024-11-20 12:38:57.428959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.767 [2024-11-20 12:38:57.429021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.767 [2024-11-20 12:38:57.429049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.767 [2024-11-20 12:38:57.429065] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.767 [2024-11-20 12:38:57.429078] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.767 [2024-11-20 12:38:57.438803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.767 qpair failed and we were unable to recover it.
00:24:51.767 [2024-11-20 12:38:57.449057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.768 [2024-11-20 12:38:57.449123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.768 [2024-11-20 12:38:57.449152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.768 [2024-11-20 12:38:57.449168] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.768 [2024-11-20 12:38:57.449181] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.768 [2024-11-20 12:38:57.458834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.768 qpair failed and we were unable to recover it.
00:24:51.768 [2024-11-20 12:38:57.468786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.768 [2024-11-20 12:38:57.468856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.768 [2024-11-20 12:38:57.468884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.768 [2024-11-20 12:38:57.468900] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.768 [2024-11-20 12:38:57.468913] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.768 [2024-11-20 12:38:57.479036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.768 qpair failed and we were unable to recover it.
00:24:51.768 [2024-11-20 12:38:57.489114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.768 [2024-11-20 12:38:57.489191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.768 [2024-11-20 12:38:57.489221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.768 [2024-11-20 12:38:57.489237] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.768 [2024-11-20 12:38:57.489251] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.768 [2024-11-20 12:38:57.499064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.768 qpair failed and we were unable to recover it.
00:24:51.768 [2024-11-20 12:38:57.509110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:51.768 [2024-11-20 12:38:57.509178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:51.768 [2024-11-20 12:38:57.509217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:51.768 [2024-11-20 12:38:57.509233] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:51.768 [2024-11-20 12:38:57.509247] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:51.768 [2024-11-20 12:38:57.519040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:51.768 qpair failed and we were unable to recover it.
00:24:52.027 [2024-11-20 12:38:57.529337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.027 [2024-11-20 12:38:57.529398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.027 [2024-11-20 12:38:57.529430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.027 [2024-11-20 12:38:57.529446] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.027 [2024-11-20 12:38:57.529459] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.027 [2024-11-20 12:38:57.539090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.027 qpair failed and we were unable to recover it.
00:24:52.027 [2024-11-20 12:38:57.549317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.027 [2024-11-20 12:38:57.549389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.027 [2024-11-20 12:38:57.549419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.027 [2024-11-20 12:38:57.549435] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.027 [2024-11-20 12:38:57.549449] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.027 [2024-11-20 12:38:57.559069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.027 qpair failed and we were unable to recover it.
00:24:52.027 [2024-11-20 12:38:57.569349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.027 [2024-11-20 12:38:57.569423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.027 [2024-11-20 12:38:57.569455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.027 [2024-11-20 12:38:57.569472] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.027 [2024-11-20 12:38:57.569502] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.027 [2024-11-20 12:38:57.578919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.027 qpair failed and we were unable to recover it.
00:24:52.027 [2024-11-20 12:38:57.589236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.027 [2024-11-20 12:38:57.589303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.027 [2024-11-20 12:38:57.589332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.027 [2024-11-20 12:38:57.589347] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.027 [2024-11-20 12:38:57.589367] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.027 [2024-11-20 12:38:57.598977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.027 qpair failed and we were unable to recover it.
00:24:52.027 [2024-11-20 12:38:57.609067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.027 [2024-11-20 12:38:57.609135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.027 [2024-11-20 12:38:57.609164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.027 [2024-11-20 12:38:57.609179] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.027 [2024-11-20 12:38:57.609193] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.027 [2024-11-20 12:38:57.618866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.027 qpair failed and we were unable to recover it.
00:24:52.027 [2024-11-20 12:38:57.629135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.027 [2024-11-20 12:38:57.629208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.027 [2024-11-20 12:38:57.629236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.027 [2024-11-20 12:38:57.629251] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.027 [2024-11-20 12:38:57.629264] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.027 [2024-11-20 12:38:57.639145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.027 qpair failed and we were unable to recover it.
00:24:52.027 [2024-11-20 12:38:57.649277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.027 [2024-11-20 12:38:57.649340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.027 [2024-11-20 12:38:57.649372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.027 [2024-11-20 12:38:57.649388] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.027 [2024-11-20 12:38:57.649402] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.027 [2024-11-20 12:38:57.658957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.027 qpair failed and we were unable to recover it.
00:24:52.028 [2024-11-20 12:38:57.669220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.028 [2024-11-20 12:38:57.669280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.028 [2024-11-20 12:38:57.669307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.028 [2024-11-20 12:38:57.669323] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.028 [2024-11-20 12:38:57.669336] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.028 [2024-11-20 12:38:57.679571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.028 qpair failed and we were unable to recover it.
00:24:52.028 [2024-11-20 12:38:57.689192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.028 [2024-11-20 12:38:57.689255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.028 [2024-11-20 12:38:57.689285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.028 [2024-11-20 12:38:57.689300] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.028 [2024-11-20 12:38:57.689314] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.028 [2024-11-20 12:38:57.699242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.028 qpair failed and we were unable to recover it.
00:24:52.028 [2024-11-20 12:38:57.709267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.028 [2024-11-20 12:38:57.709338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.028 [2024-11-20 12:38:57.709370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.028 [2024-11-20 12:38:57.709386] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.028 [2024-11-20 12:38:57.709400] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.028 [2024-11-20 12:38:57.719257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.028 qpair failed and we were unable to recover it.
00:24:52.028 [2024-11-20 12:38:57.729355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.028 [2024-11-20 12:38:57.729420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.028 [2024-11-20 12:38:57.729451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.028 [2024-11-20 12:38:57.729467] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.028 [2024-11-20 12:38:57.729488] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.028 [2024-11-20 12:38:57.739352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.028 qpair failed and we were unable to recover it.
00:24:52.028 [2024-11-20 12:38:57.749335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.028 [2024-11-20 12:38:57.749402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.028 [2024-11-20 12:38:57.749430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.028 [2024-11-20 12:38:57.749445] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.028 [2024-11-20 12:38:57.749459] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.028 [2024-11-20 12:38:57.759462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.028 qpair failed and we were unable to recover it.
00:24:52.028 [2024-11-20 12:38:57.769451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.028 [2024-11-20 12:38:57.769527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.028 [2024-11-20 12:38:57.769554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.028 [2024-11-20 12:38:57.769570] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.028 [2024-11-20 12:38:57.769584] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.028 [2024-11-20 12:38:57.779413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.028 qpair failed and we were unable to recover it.
00:24:52.028 [2024-11-20 12:38:57.789569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.028 [2024-11-20 12:38:57.789636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.028 [2024-11-20 12:38:57.789663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.028 [2024-11-20 12:38:57.789679] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.028 [2024-11-20 12:38:57.789692] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.288 [2024-11-20 12:38:57.799618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.288 qpair failed and we were unable to recover it.
00:24:52.289 [2024-11-20 12:38:57.809541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.289 [2024-11-20 12:38:57.809610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.289 [2024-11-20 12:38:57.809641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.289 [2024-11-20 12:38:57.809657] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.289 [2024-11-20 12:38:57.809671] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.289 [2024-11-20 12:38:57.819399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.289 qpair failed and we were unable to recover it.
00:24:52.289 [2024-11-20 12:38:57.829618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.289 [2024-11-20 12:38:57.829684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.289 [2024-11-20 12:38:57.829711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.289 [2024-11-20 12:38:57.829727] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.289 [2024-11-20 12:38:57.829740] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.289 [2024-11-20 12:38:57.839557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.289 qpair failed and we were unable to recover it.
00:24:52.289 [2024-11-20 12:38:57.849643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.289 [2024-11-20 12:38:57.849706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.289 [2024-11-20 12:38:57.849741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.289 [2024-11-20 12:38:57.849757] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.289 [2024-11-20 12:38:57.849771] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.289 [2024-11-20 12:38:57.859637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.289 qpair failed and we were unable to recover it.
00:24:52.289 [2024-11-20 12:38:57.869679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.289 [2024-11-20 12:38:57.869750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.289 [2024-11-20 12:38:57.869777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.289 [2024-11-20 12:38:57.869793] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.289 [2024-11-20 12:38:57.869806] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.289 [2024-11-20 12:38:57.879749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.289 qpair failed and we were unable to recover it.
00:24:52.289 [2024-11-20 12:38:57.889723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.289 [2024-11-20 12:38:57.889797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.289 [2024-11-20 12:38:57.889826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.289 [2024-11-20 12:38:57.889841] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.289 [2024-11-20 12:38:57.889855] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.289 [2024-11-20 12:38:57.899618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.289 qpair failed and we were unable to recover it.
00:24:52.289 [2024-11-20 12:38:57.909776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.289 [2024-11-20 12:38:57.909836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.289 [2024-11-20 12:38:57.909866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.289 [2024-11-20 12:38:57.909882] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.289 [2024-11-20 12:38:57.909896] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.289 [2024-11-20 12:38:57.920050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.289 qpair failed and we were unable to recover it.
00:24:52.289 [2024-11-20 12:38:57.930255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.289 [2024-11-20 12:38:57.930318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.289 [2024-11-20 12:38:57.930351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.289 [2024-11-20 12:38:57.930367] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.289 [2024-11-20 12:38:57.930388] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.289 [2024-11-20 12:38:57.940009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.289 qpair failed and we were unable to recover it.
00:24:52.289 [2024-11-20 12:38:57.950013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.289 [2024-11-20 12:38:57.950086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.289 [2024-11-20 12:38:57.950114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.289 [2024-11-20 12:38:57.950130] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.289 [2024-11-20 12:38:57.950144] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.289 [2024-11-20 12:38:57.960017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.289 qpair failed and we were unable to recover it.
00:24:52.289 [2024-11-20 12:38:57.970234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.289 [2024-11-20 12:38:57.970306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.289 [2024-11-20 12:38:57.970336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.289 [2024-11-20 12:38:57.970352] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.289 [2024-11-20 12:38:57.970365] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.289 [2024-11-20 12:38:57.980194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.289 qpair failed and we were unable to recover it.
00:24:52.289 [2024-11-20 12:38:57.990236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.289 [2024-11-20 12:38:57.990301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.289 [2024-11-20 12:38:57.990329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.289 [2024-11-20 12:38:57.990344] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.289 [2024-11-20 12:38:57.990358] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.289 [2024-11-20 12:38:58.000397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.289 qpair failed and we were unable to recover it.
00:24:52.289 [2024-11-20 12:38:58.010446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.289 [2024-11-20 12:38:58.010515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.289 [2024-11-20 12:38:58.010542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.289 [2024-11-20 12:38:58.010558] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.289 [2024-11-20 12:38:58.010571] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.289 [2024-11-20 12:38:58.020186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.289 qpair failed and we were unable to recover it.
00:24:52.289 [2024-11-20 12:38:58.030319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.289 [2024-11-20 12:38:58.030388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.289 [2024-11-20 12:38:58.030418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.289 [2024-11-20 12:38:58.030433] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.289 [2024-11-20 12:38:58.030448] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.289 [2024-11-20 12:38:58.040203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.289 qpair failed and we were unable to recover it.
00:24:52.289 [2024-11-20 12:38:58.050244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.289 [2024-11-20 12:38:58.050310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.289 [2024-11-20 12:38:58.050337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.289 [2024-11-20 12:38:58.050353] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.290 [2024-11-20 12:38:58.050366] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.549 [2024-11-20 12:38:58.060275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.549 qpair failed and we were unable to recover it.
00:24:52.549 [2024-11-20 12:38:58.070277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.549 [2024-11-20 12:38:58.070341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.549 [2024-11-20 12:38:58.070369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.549 [2024-11-20 12:38:58.070384] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.549 [2024-11-20 12:38:58.070399] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.549 [2024-11-20 12:38:58.080441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.549 qpair failed and we were unable to recover it.
00:24:52.549 [2024-11-20 12:38:58.090302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.549 [2024-11-20 12:38:58.090370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.549 [2024-11-20 12:38:58.090397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.549 [2024-11-20 12:38:58.090413] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.549 [2024-11-20 12:38:58.090426] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.549 [2024-11-20 12:38:58.100311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.549 qpair failed and we were unable to recover it.
00:24:52.549 [2024-11-20 12:38:58.110447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.549 [2024-11-20 12:38:58.110529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.549 [2024-11-20 12:38:58.110557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.549 [2024-11-20 12:38:58.110573] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.549 [2024-11-20 12:38:58.110586] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.549 [2024-11-20 12:38:58.120491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.549 qpair failed and we were unable to recover it.
00:24:52.549 [2024-11-20 12:38:58.130676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.549 [2024-11-20 12:38:58.130750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.549 [2024-11-20 12:38:58.130780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.549 [2024-11-20 12:38:58.130796] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.549 [2024-11-20 12:38:58.130810] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.549 [2024-11-20 12:38:58.140762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.549 qpair failed and we were unable to recover it.
00:24:52.549 [2024-11-20 12:38:58.150799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.549 [2024-11-20 12:38:58.150866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.549 [2024-11-20 12:38:58.150894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.549 [2024-11-20 12:38:58.150909] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.549 [2024-11-20 12:38:58.150923] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.549 [2024-11-20 12:38:58.160572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.549 qpair failed and we were unable to recover it.
00:24:52.549 [2024-11-20 12:38:58.170963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.549 [2024-11-20 12:38:58.171029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.549 [2024-11-20 12:38:58.171060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.549 [2024-11-20 12:38:58.171076] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.549 [2024-11-20 12:38:58.171090] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.549 [2024-11-20 12:38:58.180953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.549 qpair failed and we were unable to recover it.
00:24:52.549 [2024-11-20 12:38:58.190955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.549 [2024-11-20 12:38:58.191025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.549 [2024-11-20 12:38:58.191053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.549 [2024-11-20 12:38:58.191076] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.549 [2024-11-20 12:38:58.191090] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.550 [2024-11-20 12:38:58.200833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.550 qpair failed and we were unable to recover it.
00:24:52.550 [2024-11-20 12:38:58.211014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.550 [2024-11-20 12:38:58.211089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.550 [2024-11-20 12:38:58.211117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.550 [2024-11-20 12:38:58.211133] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.550 [2024-11-20 12:38:58.211146] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.550 [2024-11-20 12:38:58.220883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.550 qpair failed and we were unable to recover it.
00:24:52.550 [2024-11-20 12:38:58.231132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.550 [2024-11-20 12:38:58.231191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.550 [2024-11-20 12:38:58.231219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.550 [2024-11-20 12:38:58.231235] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.550 [2024-11-20 12:38:58.231249] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.550 [2024-11-20 12:38:58.240847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.550 qpair failed and we were unable to recover it.
00:24:52.550 [2024-11-20 12:38:58.251140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.550 [2024-11-20 12:38:58.251199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.550 [2024-11-20 12:38:58.251227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.550 [2024-11-20 12:38:58.251248] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.550 [2024-11-20 12:38:58.251262] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.550 [2024-11-20 12:38:58.260956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.550 qpair failed and we were unable to recover it.
00:24:52.550 [2024-11-20 12:38:58.271138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.550 [2024-11-20 12:38:58.271208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.550 [2024-11-20 12:38:58.271236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.550 [2024-11-20 12:38:58.271252] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.550 [2024-11-20 12:38:58.271271] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.550 [2024-11-20 12:38:58.281103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.550 qpair failed and we were unable to recover it.
00:24:52.550 [2024-11-20 12:38:58.291365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.550 [2024-11-20 12:38:58.291437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.550 [2024-11-20 12:38:58.291470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.550 [2024-11-20 12:38:58.291496] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.550 [2024-11-20 12:38:58.291511] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.550 [2024-11-20 12:38:58.301348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.550 qpair failed and we were unable to recover it.
00:24:52.550 [2024-11-20 12:38:58.311553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.550 [2024-11-20 12:38:58.311616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.550 [2024-11-20 12:38:58.311646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.550 [2024-11-20 12:38:58.311662] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.550 [2024-11-20 12:38:58.311676] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.810 [2024-11-20 12:38:58.321649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.810 qpair failed and we were unable to recover it.
00:24:52.810 [2024-11-20 12:38:58.331470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.810 [2024-11-20 12:38:58.331546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.810 [2024-11-20 12:38:58.331576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.810 [2024-11-20 12:38:58.331593] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.810 [2024-11-20 12:38:58.331606] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.810 [2024-11-20 12:38:58.341195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.810 qpair failed and we were unable to recover it.
00:24:52.810 [2024-11-20 12:38:58.351496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.810 [2024-11-20 12:38:58.351569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.810 [2024-11-20 12:38:58.351600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.810 [2024-11-20 12:38:58.351616] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.810 [2024-11-20 12:38:58.351630] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.810 [2024-11-20 12:38:58.361244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.810 qpair failed and we were unable to recover it.
00:24:52.810 [2024-11-20 12:38:58.371547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.810 [2024-11-20 12:38:58.371617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.810 [2024-11-20 12:38:58.371648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.810 [2024-11-20 12:38:58.371664] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.810 [2024-11-20 12:38:58.371677] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.810 [2024-11-20 12:38:58.381319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.810 qpair failed and we were unable to recover it.
00:24:52.810 [2024-11-20 12:38:58.391699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.810 [2024-11-20 12:38:58.391767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.810 [2024-11-20 12:38:58.391794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.810 [2024-11-20 12:38:58.391810] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.810 [2024-11-20 12:38:58.391823] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.810 [2024-11-20 12:38:58.401475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.810 qpair failed and we were unable to recover it.
00:24:52.810 [2024-11-20 12:38:58.411679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.810 [2024-11-20 12:38:58.411743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.810 [2024-11-20 12:38:58.411773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.810 [2024-11-20 12:38:58.411788] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.810 [2024-11-20 12:38:58.411802] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.810 [2024-11-20 12:38:58.421456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.810 qpair failed and we were unable to recover it.
00:24:52.810 [2024-11-20 12:38:58.431691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.810 [2024-11-20 12:38:58.431759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.810 [2024-11-20 12:38:58.431789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.810 [2024-11-20 12:38:58.431805] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.810 [2024-11-20 12:38:58.431819] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.810 [2024-11-20 12:38:58.441506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.810 qpair failed and we were unable to recover it.
00:24:52.810 [2024-11-20 12:38:58.451835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.810 [2024-11-20 12:38:58.451905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.810 [2024-11-20 12:38:58.451942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.810 [2024-11-20 12:38:58.451958] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.810 [2024-11-20 12:38:58.451972] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.810 [2024-11-20 12:38:58.461642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.810 qpair failed and we were unable to recover it.
00:24:52.810 [2024-11-20 12:38:58.471844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.810 [2024-11-20 12:38:58.471910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.810 [2024-11-20 12:38:58.471941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.810 [2024-11-20 12:38:58.471957] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.810 [2024-11-20 12:38:58.471970] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.810 [2024-11-20 12:38:58.481876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.810 qpair failed and we were unable to recover it.
00:24:52.810 [2024-11-20 12:38:58.491861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.810 [2024-11-20 12:38:58.491926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.810 [2024-11-20 12:38:58.491954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.810 [2024-11-20 12:38:58.491969] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.810 [2024-11-20 12:38:58.491983] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.810 [2024-11-20 12:38:58.501764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.810 qpair failed and we were unable to recover it.
00:24:52.811 [2024-11-20 12:38:58.512044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.811 [2024-11-20 12:38:58.512115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.811 [2024-11-20 12:38:58.512143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.811 [2024-11-20 12:38:58.512159] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.811 [2024-11-20 12:38:58.512173] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.811 [2024-11-20 12:38:58.521920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.811 qpair failed and we were unable to recover it.
00:24:52.811 [2024-11-20 12:38:58.532042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.811 [2024-11-20 12:38:58.532113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.811 [2024-11-20 12:38:58.532141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.811 [2024-11-20 12:38:58.532164] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.811 [2024-11-20 12:38:58.532179] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.811 [2024-11-20 12:38:58.541911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.811 qpair failed and we were unable to recover it.
00:24:52.811 [2024-11-20 12:38:58.552072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.811 [2024-11-20 12:38:58.552135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.811 [2024-11-20 12:38:58.552164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.811 [2024-11-20 12:38:58.552179] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.811 [2024-11-20 12:38:58.552192] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:52.811 [2024-11-20 12:38:58.562092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.811 qpair failed and we were unable to recover it.
00:24:52.811 [2024-11-20 12:38:58.572249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:52.811 [2024-11-20 12:38:58.572308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:52.811 [2024-11-20 12:38:58.572341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:52.811 [2024-11-20 12:38:58.572357] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:52.811 [2024-11-20 12:38:58.572370] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.069 [2024-11-20 12:38:58.581922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.069 qpair failed and we were unable to recover it.
00:24:53.069 [2024-11-20 12:38:58.592148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.069 [2024-11-20 12:38:58.592218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.069 [2024-11-20 12:38:58.592247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.069 [2024-11-20 12:38:58.592263] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.069 [2024-11-20 12:38:58.592278] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.069 [2024-11-20 12:38:58.602243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.070 qpair failed and we were unable to recover it.
00:24:53.070 [2024-11-20 12:38:58.612294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.070 [2024-11-20 12:38:58.612363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.070 [2024-11-20 12:38:58.612391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.070 [2024-11-20 12:38:58.612406] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.070 [2024-11-20 12:38:58.612420] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.070 [2024-11-20 12:38:58.622292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.070 qpair failed and we were unable to recover it.
00:24:53.070 [2024-11-20 12:38:58.632354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.070 [2024-11-20 12:38:58.632424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.070 [2024-11-20 12:38:58.632451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.070 [2024-11-20 12:38:58.632467] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.070 [2024-11-20 12:38:58.632490] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.070 [2024-11-20 12:38:58.642295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.070 qpair failed and we were unable to recover it.
00:24:53.070 [2024-11-20 12:38:58.652485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.070 [2024-11-20 12:38:58.652553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.070 [2024-11-20 12:38:58.652580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.070 [2024-11-20 12:38:58.652596] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.070 [2024-11-20 12:38:58.652609] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.070 [2024-11-20 12:38:58.662352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.070 qpair failed and we were unable to recover it.
00:24:53.070 [2024-11-20 12:38:58.672486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.070 [2024-11-20 12:38:58.672551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.070 [2024-11-20 12:38:58.672578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.070 [2024-11-20 12:38:58.672593] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.070 [2024-11-20 12:38:58.672606] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.070 [2024-11-20 12:38:58.682551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.070 qpair failed and we were unable to recover it.
00:24:53.070 [2024-11-20 12:38:58.692423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.070 [2024-11-20 12:38:58.692505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.070 [2024-11-20 12:38:58.692532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.070 [2024-11-20 12:38:58.692548] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.070 [2024-11-20 12:38:58.692561] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.070 [2024-11-20 12:38:58.702168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.070 qpair failed and we were unable to recover it.
00:24:53.070 [2024-11-20 12:38:58.712648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.070 [2024-11-20 12:38:58.712712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.070 [2024-11-20 12:38:58.712742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.070 [2024-11-20 12:38:58.712759] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.070 [2024-11-20 12:38:58.712772] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.070 [2024-11-20 12:38:58.722355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.070 qpair failed and we were unable to recover it.
00:24:53.070 [2024-11-20 12:38:58.732691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.070 [2024-11-20 12:38:58.732758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.070 [2024-11-20 12:38:58.732785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.070 [2024-11-20 12:38:58.732801] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.070 [2024-11-20 12:38:58.732814] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.070 [2024-11-20 12:38:58.742477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.070 qpair failed and we were unable to recover it.
00:24:53.070 [2024-11-20 12:38:58.752511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.070 [2024-11-20 12:38:58.752581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.070 [2024-11-20 12:38:58.752612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.070 [2024-11-20 12:38:58.752628] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.070 [2024-11-20 12:38:58.752641] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.070 [2024-11-20 12:38:58.762661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.070 qpair failed and we were unable to recover it.
00:24:53.070 [2024-11-20 12:38:58.772732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.070 [2024-11-20 12:38:58.772807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.070 [2024-11-20 12:38:58.772835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.070 [2024-11-20 12:38:58.772850] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.070 [2024-11-20 12:38:58.772864] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.070 [2024-11-20 12:38:58.782652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.070 qpair failed and we were unable to recover it.
00:24:53.070 [2024-11-20 12:38:58.792813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.070 [2024-11-20 12:38:58.792880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.070 [2024-11-20 12:38:58.792916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.070 [2024-11-20 12:38:58.792932] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.070 [2024-11-20 12:38:58.792946] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.070 [2024-11-20 12:38:58.802781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.070 qpair failed and we were unable to recover it.
00:24:53.070 [2024-11-20 12:38:58.812891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.070 [2024-11-20 12:38:58.812950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.070 [2024-11-20 12:38:58.812978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.070 [2024-11-20 12:38:58.812994] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.070 [2024-11-20 12:38:58.813008] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.070 [2024-11-20 12:38:58.822714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.070 qpair failed and we were unable to recover it.
00:24:53.329 [2024-11-20 12:38:58.832748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.329 [2024-11-20 12:38:58.832818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.330 [2024-11-20 12:38:58.832846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.330 [2024-11-20 12:38:58.832861] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.330 [2024-11-20 12:38:58.832875] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.330 [2024-11-20 12:38:58.842745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.330 qpair failed and we were unable to recover it.
00:24:53.330 [2024-11-20 12:38:58.852987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.330 [2024-11-20 12:38:58.853056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.330 [2024-11-20 12:38:58.853084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.330 [2024-11-20 12:38:58.853100] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.330 [2024-11-20 12:38:58.853113] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.330 [2024-11-20 12:38:58.862845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.330 qpair failed and we were unable to recover it.
00:24:53.330 [2024-11-20 12:38:58.873151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.330 [2024-11-20 12:38:58.873216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.330 [2024-11-20 12:38:58.873244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.330 [2024-11-20 12:38:58.873266] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.330 [2024-11-20 12:38:58.873280] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.330 [2024-11-20 12:38:58.882947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.330 qpair failed and we were unable to recover it.
00:24:53.330 [2024-11-20 12:38:58.893264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.330 [2024-11-20 12:38:58.893329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.330 [2024-11-20 12:38:58.893357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.330 [2024-11-20 12:38:58.893372] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.330 [2024-11-20 12:38:58.893386] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.330 [2024-11-20 12:38:58.903036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.330 qpair failed and we were unable to recover it.
00:24:53.330 [2024-11-20 12:38:58.913169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.330 [2024-11-20 12:38:58.913241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.330 [2024-11-20 12:38:58.913271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.330 [2024-11-20 12:38:58.913287] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.330 [2024-11-20 12:38:58.913301] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.330 [2024-11-20 12:38:58.923155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.330 qpair failed and we were unable to recover it.
00:24:53.330 [2024-11-20 12:38:58.933117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.330 [2024-11-20 12:38:58.933184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.330 [2024-11-20 12:38:58.933214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.330 [2024-11-20 12:38:58.933230] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.330 [2024-11-20 12:38:58.933244] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.330 [2024-11-20 12:38:58.942840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.330 qpair failed and we were unable to recover it.
00:24:53.330 [2024-11-20 12:38:58.953117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.330 [2024-11-20 12:38:58.953186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.330 [2024-11-20 12:38:58.953214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.330 [2024-11-20 12:38:58.953230] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.330 [2024-11-20 12:38:58.953243] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.330 [2024-11-20 12:38:58.963464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.330 qpair failed and we were unable to recover it.
00:24:53.330 [2024-11-20 12:38:58.972937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.330 [2024-11-20 12:38:58.973003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.330 [2024-11-20 12:38:58.973034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.330 [2024-11-20 12:38:58.973050] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.330 [2024-11-20 12:38:58.973063] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.330 [2024-11-20 12:38:58.983041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.330 qpair failed and we were unable to recover it.
00:24:53.330 [2024-11-20 12:38:58.993067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.330 [2024-11-20 12:38:58.993139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.330 [2024-11-20 12:38:58.993169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.330 [2024-11-20 12:38:58.993184] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.330 [2024-11-20 12:38:58.993199] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.330 [2024-11-20 12:38:59.003020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.330 qpair failed and we were unable to recover it.
00:24:53.330 [2024-11-20 12:38:59.013200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.330 [2024-11-20 12:38:59.013268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.330 [2024-11-20 12:38:59.013296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.330 [2024-11-20 12:38:59.013312] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.330 [2024-11-20 12:38:59.013325] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.330 [2024-11-20 12:38:59.023213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.330 qpair failed and we were unable to recover it.
00:24:53.330 [2024-11-20 12:38:59.033353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.330 [2024-11-20 12:38:59.033419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.330 [2024-11-20 12:38:59.033449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.330 [2024-11-20 12:38:59.033465] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.330 [2024-11-20 12:38:59.033487] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.330 [2024-11-20 12:38:59.043260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.331 qpair failed and we were unable to recover it.
00:24:53.331 [2024-11-20 12:38:59.053358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:53.331 [2024-11-20 12:38:59.053418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:53.331 [2024-11-20 12:38:59.053447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:53.331 [2024-11-20 12:38:59.053463] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:53.331 [2024-11-20 12:38:59.053476] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:53.331 [2024-11-20 12:38:59.063305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:53.331 qpair failed and we were unable to recover it.
00:24:53.331 [2024-11-20 12:38:59.073332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:53.331 [2024-11-20 12:38:59.073399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:53.331 [2024-11-20 12:38:59.073429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:53.331 [2024-11-20 12:38:59.073445] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:53.331 [2024-11-20 12:38:59.073458] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:53.331 [2024-11-20 12:38:59.083510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:53.331 qpair failed and we were unable to recover it. 00:24:53.589 [2024-11-20 12:38:59.093315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:53.589 [2024-11-20 12:38:59.093378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:53.589 [2024-11-20 12:38:59.093409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:53.589 [2024-11-20 12:38:59.093424] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:53.589 [2024-11-20 12:38:59.093438] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:53.589 [2024-11-20 12:38:59.103400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:53.589 qpair failed and we were unable to recover it. 00:24:53.589 [2024-11-20 12:38:59.113551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:53.589 [2024-11-20 12:38:59.113615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:53.589 [2024-11-20 12:38:59.113643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:53.589 [2024-11-20 12:38:59.113659] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:53.589 [2024-11-20 12:38:59.113673] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:53.589 [2024-11-20 12:38:59.123554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:53.589 qpair failed and we were unable to recover it. 
00:24:53.589 [2024-11-20 12:38:59.133650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:53.589 [2024-11-20 12:38:59.133718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:53.589 [2024-11-20 12:38:59.133757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:53.589 [2024-11-20 12:38:59.133774] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:53.589 [2024-11-20 12:38:59.133788] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:53.589 [2024-11-20 12:38:59.143373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:53.589 qpair failed and we were unable to recover it. 00:24:53.589 [2024-11-20 12:38:59.153599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:53.589 [2024-11-20 12:38:59.153667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:53.589 [2024-11-20 12:38:59.153695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:53.589 [2024-11-20 12:38:59.153711] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:53.589 [2024-11-20 12:38:59.153724] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:53.589 [2024-11-20 12:38:59.163608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:53.589 qpair failed and we were unable to recover it. 00:24:53.589 [2024-11-20 12:38:59.163722] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:24:53.589 A controller has encountered a failure and is being reset. 00:24:53.589 [2024-11-20 12:38:59.163838] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:53.589 [2024-11-20 12:38:59.184972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:24:53.589 Controller properly reset. 00:24:53.589 Initializing NVMe Controllers 00:24:53.589 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:53.589 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:53.589 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:53.589 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:53.589 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:53.589 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:53.589 Initialization complete. Launching workers. 
00:24:53.589 Starting thread on core 1 00:24:53.589 Starting thread on core 2 00:24:53.589 Starting thread on core 3 00:24:53.589 Starting thread on core 0 00:24:53.589 12:38:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:24:53.589 00:24:53.589 real 0m11.105s 00:24:53.590 user 0m24.293s 00:24:53.590 sys 0m2.188s 00:24:53.590 12:38:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:53.590 12:38:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:53.590 ************************************ 00:24:53.590 END TEST nvmf_target_disconnect_tc2 00:24:53.590 ************************************ 00:24:53.590 12:38:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:24:53.590 12:38:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:24:53.590 12:38:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:53.590 12:38:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:53.590 12:38:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:53.590 ************************************ 00:24:53.590 START TEST nvmf_target_disconnect_tc3 00:24:53.590 ************************************ 00:24:53.590 12:38:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc3 00:24:53.590 12:38:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=2841936 00:24:53.590 12:38:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:24:53.590 12:38:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:24:56.160 12:39:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 2841327 00:24:56.160 12:39:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:24:56.730 Read completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Read completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Write completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Read completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Write completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Write completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Write completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Read completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Read completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Write completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Write 
completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Write completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Read completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Write completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Write completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Write completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Write completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Read completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Write completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Write completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Write completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Write completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Read completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Read completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Write completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Read completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Write completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Write completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Read completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Read completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Read completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.730 Read completed with error (sct=0, sc=8) 00:24:56.730 starting I/O failed 00:24:56.991 [2024-11-20 12:39:02.495804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.562 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 2841327 Killed "${NVMF_APP[@]}" "$@" 00:24:57.562 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:24:57.562 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:57.562 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:57.562 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:57.562 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:57.562 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2842351 00:24:57.562 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:57.562 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2842351 00:24:57.562 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- 
common/autotest_common.sh@835 -- # '[' -z 2842351 ']' 00:24:57.562 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.562 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:57.562 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.562 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:57.562 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:57.821 [2024-11-20 12:39:03.378241] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:24:57.822 [2024-11-20 12:39:03.378348] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.822 [2024-11-20 12:39:03.453355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:57.822 Read completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Write completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Read completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Read completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Write completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Read completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Read completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Write completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Write completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Read completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Write completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Write completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Write completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Read completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Write completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Read completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Write completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Read completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Read completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Write completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Read completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Read completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Read completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Write completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 
00:24:57.822 Read completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Write completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Write completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Write completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Read completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Write completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Write completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 Read completed with error (sct=0, sc=8) 00:24:57.822 starting I/O failed 00:24:57.822 [2024-11-20 12:39:03.501406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:24:57.822 [2024-11-20 12:39:03.503562] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:57.822 [2024-11-20 12:39:03.503601] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:57.822 [2024-11-20 12:39:03.503618] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:57.822 [2024-11-20 12:39:03.517849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:57.822 [2024-11-20 12:39:03.517906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.822 [2024-11-20 12:39:03.517921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.822 [2024-11-20 12:39:03.517935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.822 [2024-11-20 12:39:03.517946] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
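The app_setup_trace notices above give the exact recipe for inspecting these failures after the fact. A minimal sketch, assuming the SPDK tree checked out for this run and the shm id 0 reported in the notices (the spdk_trace binary path is an assumption based on the standard SPDK build layout):

    # Snapshot the nvmf target's tracepoints while it is still running,
    # exactly as the notice above suggests:
    build/bin/spdk_trace -s nvmf -i 0
    # Or preserve the shared-memory trace file for offline analysis, per
    # the last notice:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0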
00:24:57.822 [2024-11-20 12:39:03.519238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:57.822 [2024-11-20 12:39:03.519290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:57.822 [2024-11-20 12:39:03.519343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:24:57.822 [2024-11-20 12:39:03.519347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:58.082 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:58.082 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@868 -- # return 0 00:24:58.082 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:58.082 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:58.082 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:58.082 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.082 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:58.082 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.082 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:58.082 Malloc0 00:24:58.082 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.082 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:58.082 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.082 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:58.082 [2024-11-20 12:39:03.748329] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18f7150/0x1902cf0) succeed. 00:24:58.082 [2024-11-20 12:39:03.764033] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18f87e0/0x1982d80) succeed. 
00:24:58.341 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.341 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:58.341 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.341 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:58.341 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.341 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:58.341 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.341 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:58.341 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.341 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:24:58.341 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.341 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:58.341 [2024-11-20 12:39:03.950896] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:24:58.341 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.341 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:24:58.341 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.341 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:58.341 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.341 12:39:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 2841936 00:24:58.909 [2024-11-20 12:39:04.508004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:24:58.909 qpair failed and we were unable to recover it. 
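The listener setup that just completed reduces to a short RPC sequence. A minimal sketch of the same configuration driven by hand through scripts/rpc.py — every command mirrors an rpc_cmd call traced above, and the sketch assumes the nvmf_tgt started earlier is already listening on the default /var/tmp/spdk.sock:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # Backing namespace: a 64 MiB, 512-byte-block malloc bdev
    $RPC bdev_malloc_create 64 512 -b Malloc0
    # RDMA transport with the shared-buffer count used by the test
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024
    # Subsystem, namespace, and the tc3 listener on the failover address
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420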
00:24:58.909 [2024-11-20 12:39:04.509990] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:58.909 [2024-11-20 12:39:04.510019] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:58.909 [2024-11-20 12:39:04.510034] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:59.848 [2024-11-20 12:39:05.513988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:24:59.848 qpair failed and we were unable to recover it. 00:24:59.848 [2024-11-20 12:39:05.515853] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:59.848 [2024-11-20 12:39:05.515881] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:59.848 [2024-11-20 12:39:05.515895] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:25:00.783 [2024-11-20 12:39:06.520069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:25:00.783 qpair failed and we were unable to recover it. 00:25:00.783 [2024-11-20 12:39:06.522110] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:00.783 [2024-11-20 12:39:06.522142] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:00.783 [2024-11-20 12:39:06.522156] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:25:02.158 [2024-11-20 12:39:07.526406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.158 qpair failed and we were unable to recover it. 00:25:02.158 [2024-11-20 12:39:07.528636] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:02.158 [2024-11-20 12:39:07.528667] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:02.158 [2024-11-20 12:39:07.528682] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:25:02.835 [2024-11-20 12:39:08.532892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:25:02.835 qpair failed and we were unable to recover it. 
00:25:02.835 [2024-11-20 12:39:08.534806] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:02.835 [2024-11-20 12:39:08.534834] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:02.835 [2024-11-20 12:39:08.534848] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:25:04.209 [2024-11-20 12:39:09.539120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.209 qpair failed and we were unable to recover it. 00:25:04.209 [2024-11-20 12:39:09.541370] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:04.209 [2024-11-20 12:39:09.541400] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:04.209 [2024-11-20 12:39:09.541415] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:25:05.148 [2024-11-20 12:39:10.545699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.148 qpair failed and we were unable to recover it. 00:25:06.080 Write completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Write completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Read completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Read completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Write completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Write completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Read completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Read completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Write completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Read completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Read completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Write completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Read completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Read completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Write completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Write completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Read completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Write completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Write completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Write completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Read completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Write completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Read completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Write completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 
00:25:06.081 Read completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Read completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Write completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Read completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Read completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Read completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Write completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 Read completed with error (sct=0, sc=8) 00:25:06.081 starting I/O failed 00:25:06.081 [2024-11-20 12:39:11.551650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.015 Write completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Write completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Read completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Read completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Write completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Write completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Read completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Read completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Write completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Read completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Read completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Write completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Read completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Read completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Write completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Write completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Read completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Write completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Write completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Write completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Read completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Write completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Read completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Write completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Read completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Read completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Write completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Read completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Read completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Read completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 Write completed with error (sct=0, sc=8) 00:25:07.015 
starting I/O failed 00:25:07.015 Read completed with error (sct=0, sc=8) 00:25:07.015 starting I/O failed 00:25:07.015 [2024-11-20 12:39:12.556997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.015 [2024-11-20 12:39:12.557053] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed 00:25:07.015 A controller has encountered a failure and is being reset. 00:25:07.015 Resorting to new failover address 192.168.100.9 00:25:07.015 [2024-11-20 12:39:12.557141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:07.015 [2024-11-20 12:39:12.557194] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:07.015 [2024-11-20 12:39:12.577823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:25:07.015 Controller properly reset. 00:25:07.015 Initializing NVMe Controllers 00:25:07.015 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:07.015 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:07.015 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:07.015 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:07.015 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:07.015 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:07.015 Initialization complete. Launching workers. 
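The failover recorded above ("Resorting to new failover address 192.168.100.9") is driven by the reconnect example launched at the start of tc3. Its invocation, repeated here for reference from the earlier trace, shows where the second address comes from: the alt_traddr field in the -r transport string is the address the host falls back to once 192.168.100.8 stops answering:

    # As started by host/target_disconnect.sh@55 at 12:38:59 above:
    build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'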
00:25:07.015 Starting thread on core 1 00:25:07.015 Starting thread on core 2 00:25:07.015 Starting thread on core 3 00:25:07.015 Starting thread on core 0 00:25:07.015 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:25:07.015 00:25:07.015 real 0m13.367s 00:25:07.015 user 0m48.392s 00:25:07.015 sys 0m3.385s 00:25:07.015 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:07.015 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:07.016 ************************************ 00:25:07.016 END TEST nvmf_target_disconnect_tc3 00:25:07.016 ************************************ 00:25:07.016 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:07.016 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:07.016 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:07.016 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:25:07.016 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:07.016 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:07.016 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:25:07.016 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:07.016 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:07.016 rmmod nvme_rdma 00:25:07.016 rmmod nvme_fabrics 00:25:07.016 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:07.016 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:25:07.016 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:25:07.016 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2842351 ']' 00:25:07.016 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2842351 00:25:07.016 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2842351 ']' 00:25:07.016 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2842351 00:25:07.016 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:25:07.016 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:07.016 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2842351 00:25:07.274 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:25:07.274 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:25:07.274 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2842351' 00:25:07.274 killing process with pid 2842351 00:25:07.274 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 2842351 00:25:07.274 12:39:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2842351 00:25:07.533 12:39:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:07.533 12:39:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:25:07.533 00:25:07.533 real 0m28.603s 00:25:07.533 user 1m52.160s 00:25:07.533 sys 0m7.857s 00:25:07.533 12:39:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:07.533 12:39:13 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:07.533 ************************************ 00:25:07.533 END TEST nvmf_target_disconnect 00:25:07.533 ************************************ 00:25:07.533 12:39:13 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:25:07.533 00:25:07.533 real 4m50.489s 00:25:07.533 user 13m13.371s 00:25:07.533 sys 0m48.673s 00:25:07.533 12:39:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:07.533 12:39:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.533 ************************************ 00:25:07.533 END TEST nvmf_host 00:25:07.533 ************************************ 00:25:07.533 12:39:13 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:25:07.533 00:25:07.533 real 16m52.663s 00:25:07.533 user 46m59.195s 00:25:07.533 sys 2m58.793s 00:25:07.533 12:39:13 nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:07.533 12:39:13 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:07.533 ************************************ 00:25:07.533 END TEST nvmf_rdma 00:25:07.533 ************************************ 00:25:07.533 12:39:13 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:25:07.533 12:39:13 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:07.533 12:39:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:07.533 12:39:13 -- common/autotest_common.sh@10 -- # set +x 00:25:07.533 ************************************ 00:25:07.533 START TEST spdkcli_nvmf_rdma 00:25:07.533 ************************************ 00:25:07.533 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:25:07.533 * Looking for test storage... 
00:25:07.533 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:25:07.533 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:07.533 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # lcov --version 00:25:07.533 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:07.792 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:07.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.792 --rc genhtml_branch_coverage=1 00:25:07.792 --rc genhtml_function_coverage=1 00:25:07.793 --rc genhtml_legend=1 00:25:07.793 --rc geninfo_all_blocks=1 00:25:07.793 --rc geninfo_unexecuted_blocks=1 00:25:07.793 00:25:07.793 ' 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:07.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:25:07.793 --rc genhtml_branch_coverage=1 00:25:07.793 --rc genhtml_function_coverage=1 00:25:07.793 --rc genhtml_legend=1 00:25:07.793 --rc geninfo_all_blocks=1 00:25:07.793 --rc geninfo_unexecuted_blocks=1 00:25:07.793 00:25:07.793 ' 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:07.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.793 --rc genhtml_branch_coverage=1 00:25:07.793 --rc genhtml_function_coverage=1 00:25:07.793 --rc genhtml_legend=1 00:25:07.793 --rc geninfo_all_blocks=1 00:25:07.793 --rc geninfo_unexecuted_blocks=1 00:25:07.793 00:25:07.793 ' 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:07.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.793 --rc genhtml_branch_coverage=1 00:25:07.793 --rc genhtml_function_coverage=1 00:25:07.793 --rc genhtml_legend=1 00:25:07.793 --rc geninfo_all_blocks=1 00:25:07.793 --rc geninfo_unexecuted_blocks=1 00:25:07.793 00:25:07.793 ' 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f19ece52-b769-e111-bd1d-001e673d80ae 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=f19ece52-b769-e111-bd1d-001e673d80ae 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:07.793 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter 
run_nvmf_tgt 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2843662 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 2843662 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # '[' -z 2843662 ']' 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:07.793 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:07.793 [2024-11-20 12:39:13.433281] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:25:07.793 [2024-11-20 12:39:13.433373] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2843662 ] 00:25:07.793 [2024-11-20 12:39:13.503616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:08.052 [2024-11-20 12:39:13.567630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.052 [2024-11-20 12:39:13.567695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.052 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.052 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@868 -- # return 0 00:25:08.052 12:39:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:08.052 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:08.052 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:08.052 12:39:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:08.052 12:39:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:25:08.052 12:39:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:25:08.052 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:25:08.052 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.052 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:08.052 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:08.052 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:08.052 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.052 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:08.052 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:25:08.052 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:08.052 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:08.052 12:39:13 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:25:08.052 12:39:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
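[Editor's note] Each iteration of the per-device loop below resolves a matched PCI address to its kernel net device through sysfs (the pci_net_devs glob at nvmf/common.sh@411); that lookup is what produces the "Found net devices under ...: mlx_0_0" lines. A sketch of the same lookup, using the first address from this run:
pci=0000:83:00.0
for path in /sys/bus/pci/devices/"$pci"/net/*; do
    [[ -e "$path" ]] || continue   # glob matches nothing if the driver is unbound
    echo "Found net devices under $pci: ${path##*/}"
done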
00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.0 (0x15b3 - 0x1015)' 00:25:09.955 Found 0000:83:00.0 (0x15b3 - 0x1015) 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:83:00.1 (0x15b3 - 0x1015)' 00:25:09.955 Found 0000:83:00.1 (0x15b3 - 0x1015) 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.0: mlx_0_0' 00:25:09.955 Found net devices under 0000:83:00.0: mlx_0_0 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:83:00.1: mlx_0_1' 00:25:09.955 Found net devices under 0000:83:00.1: mlx_0_1 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@444 
-- # [[ yes == yes ]] 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:09.955 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:09.956 
12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:09.956 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:09.956 link/ether 24:8a:07:4b:f4:a0 brd ff:ff:ff:ff:ff:ff 00:25:09.956 altname enp131s0f0np0 00:25:09.956 inet 192.168.100.8/24 scope global mlx_0_0 00:25:09.956 valid_lft forever preferred_lft forever 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:09.956 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:09.956 link/ether 24:8a:07:4b:f4:a1 brd ff:ff:ff:ff:ff:ff 00:25:09.956 altname enp131s0f1np1 00:25:09.956 inet 192.168.100.9/24 scope global mlx_0_1 00:25:09.956 valid_lft forever preferred_lft forever 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:25:09.956 192.168.100.9' 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:25:09.956 192.168.100.9' 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # head -n 1 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:25:09.956 192.168.100.9' 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:09.956 12:39:15 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:09.956 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:09.956 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:09.956 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:09.956 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:09.956 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:09.956 '\''nvmf/transport 
create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:09.956 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:09.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:09.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:09.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:25:09.956 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:09.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:09.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:25:09.956 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:09.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:09.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:25:09.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:25:09.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:09.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:09.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:09.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:09.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:25:09.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:25:09.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:09.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:09.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:09.957 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:09.957 ' 00:25:12.487 [2024-11-20 12:39:18.204576] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbcecf0/0xbdc490) succeed. 00:25:12.487 [2024-11-20 12:39:18.219658] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbd03d0/0xc5c500) succeed. 
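[Editor's note] Every "Executing command" record that follows is a [command, expected-match, flag] triple handed to test/spdkcli/spdkcli_job.py, which replays the command against the running target and checks for the expected string (the exact semantics of the third flag live in that script). Replayed by hand, assuming the target from this job were still listening on its default RPC socket, the first pair would look roughly like:
cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
./scripts/spdkcli.py ll /bdevs/malloc | grep -q Malloc1   # verify the new bdev shows up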
00:25:13.861 [2024-11-20 12:39:19.599849] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:25:16.393 [2024-11-20 12:39:21.987662] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:25:18.292 [2024-11-20 12:39:24.050723] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:25:20.191 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:20.191 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:20.191 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:20.191 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:20.191 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:20.191 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:20.191 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:20.191 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:20.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:20.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:20.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:25:20.191 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:20.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:20.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:25:20.191 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:20.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:20.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:25:20.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:25:20.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:20.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:20.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:20.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:20.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:25:20.191 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:25:20.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:20.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:20.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:20.191 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:20.191 12:39:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:20.191 12:39:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:20.191 12:39:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:20.191 12:39:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:20.191 12:39:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:20.191 12:39:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:20.191 12:39:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:25:20.191 12:39:25 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:20.758 12:39:26 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:20.758 12:39:26 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:20.758 12:39:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:20.758 12:39:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:20.758 12:39:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:20.758 12:39:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:20.758 12:39:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:20.758 12:39:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:20.758 12:39:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:20.758 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:20.758 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:20.758 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:20.758 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:25:20.758 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:25:20.758 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:20.758 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:20.758 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:20.758 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:20.758 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:20.758 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:20.758 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:20.758 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:20.758 ' 00:25:26.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:26.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:26.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:26.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:26.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:25:26.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:25:26.022 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:26.022 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:26.022 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:26.022 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:26.022 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:26.022 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:26.022 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:26.022 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:26.280 12:39:31 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:26.280 12:39:31 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:26.280 12:39:31 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:26.280 12:39:31 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 2843662 00:25:26.280 12:39:31 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' -z 2843662 ']' 00:25:26.280 12:39:31 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # kill -0 2843662 00:25:26.280 12:39:31 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # uname 00:25:26.281 12:39:31 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:26.281 12:39:31 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2843662 00:25:26.281 12:39:31 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:26.281 12:39:31 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:26.281 12:39:31 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2843662' 00:25:26.281 killing process with pid 2843662 00:25:26.281 12:39:31 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # kill 2843662 00:25:26.281 12:39:31 spdkcli_nvmf_rdma -- common/autotest_common.sh@978 -- # wait 2843662 00:25:26.539 12:39:32 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:25:26.539 12:39:32 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:26.539 12:39:32 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:25:26.539 12:39:32 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 
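[Editor's note] The killprocess 2843662 sequence above follows the usual autotest teardown pattern: confirm the pid is still alive with kill -0, confirm by name that it is the reactor process (and not sudo), signal it, then wait for it to exit. Reduced to its essentials (wait only succeeds here because nvmf_tgt was launched from the same shell):
pid=2843662   # nvmf_tgt pid recorded at startup in this run
if kill -0 "$pid" 2>/dev/null; then
    kill "$pid"                    # default SIGTERM
    wait "$pid" 2>/dev/null || true
fi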
00:25:26.539 12:39:32 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:26.539 12:39:32 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:25:26.539 12:39:32 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:26.539 12:39:32 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:26.539 rmmod nvme_rdma 00:25:26.539 rmmod nvme_fabrics 00:25:26.539 12:39:32 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:26.539 12:39:32 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:25:26.539 12:39:32 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:25:26.539 12:39:32 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:25:26.539 12:39:32 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:26.539 12:39:32 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:25:26.539 00:25:26.539 real 0m19.033s 00:25:26.539 user 0m41.169s 00:25:26.539 sys 0m2.012s 00:25:26.539 12:39:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:26.539 12:39:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:26.539 ************************************ 00:25:26.539 END TEST spdkcli_nvmf_rdma 00:25:26.539 ************************************ 00:25:26.539 12:39:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:26.539 12:39:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:26.539 12:39:32 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:25:26.539 12:39:32 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:25:26.539 12:39:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:25:26.539 12:39:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:26.539 12:39:32 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:26.539 12:39:32 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:26.539 12:39:32 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:25:26.539 12:39:32 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:25:26.539 12:39:32 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:25:26.539 12:39:32 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:25:26.539 12:39:32 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:25:26.539 12:39:32 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:25:26.539 12:39:32 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:25:26.539 12:39:32 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:25:26.539 12:39:32 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:25:26.539 12:39:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:26.539 12:39:32 -- common/autotest_common.sh@10 -- # set +x 00:25:26.540 12:39:32 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:25:26.540 12:39:32 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:25:26.540 12:39:32 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:25:26.540 12:39:32 -- common/autotest_common.sh@10 -- # set +x 00:25:28.443 INFO: APP EXITING 00:25:28.443 INFO: killing all VMs 00:25:28.443 INFO: killing vhost app 00:25:28.443 INFO: EXIT DONE 00:25:29.382 Waiting for block devices as requested 00:25:29.642 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:25:29.642 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:25:29.642 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:25:29.901 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:25:29.901 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:25:29.901 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:25:29.901 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:25:30.162 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 
00:25:30.162 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:25:30.162 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:25:30.162 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:25:30.420 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:25:30.420 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:25:30.420 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:25:30.678 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:25:30.678 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:25:30.678 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:25:32.057 Cleaning 00:25:32.057 Removing: /var/run/dpdk/spdk0/config 00:25:32.057 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:32.057 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:32.057 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:32.057 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:32.057 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:25:32.057 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:25:32.057 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:25:32.057 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:25:32.057 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:32.058 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:32.058 Removing: /var/run/dpdk/spdk1/config 00:25:32.058 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:32.058 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:32.058 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:32.058 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:32.058 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:25:32.058 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:25:32.058 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:25:32.058 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:25:32.058 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:32.058 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:32.058 Removing: /var/run/dpdk/spdk1/mp_socket 00:25:32.058 Removing: /var/run/dpdk/spdk2/config 00:25:32.058 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:32.058 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:32.058 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:32.058 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:32.058 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:25:32.058 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:25:32.058 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:25:32.058 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:25:32.058 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:32.058 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:32.058 Removing: /var/run/dpdk/spdk3/config 00:25:32.058 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:32.058 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:32.058 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:32.058 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:32.058 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:25:32.058 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:25:32.058 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:25:32.058 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:25:32.058 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:32.058 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:32.058 Removing: /var/run/dpdk/spdk4/config 00:25:32.058 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:32.058 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:32.058 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:32.058 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:32.058 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:25:32.058 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:25:32.058 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:25:32.318 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:25:32.318 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:32.318 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:32.318 Removing: /dev/shm/bdevperf_trace.pid2711155 00:25:32.318 Removing: /dev/shm/bdev_svc_trace.1 00:25:32.318 Removing: /dev/shm/nvmf_trace.0 00:25:32.318 Removing: /dev/shm/spdk_tgt_trace.pid2688692 00:25:32.318 Removing: /var/run/dpdk/spdk0 00:25:32.318 Removing: /var/run/dpdk/spdk1 00:25:32.318 Removing: /var/run/dpdk/spdk2 00:25:32.318 Removing: /var/run/dpdk/spdk3 00:25:32.318 Removing: /var/run/dpdk/spdk4 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2687502 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2688053 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2688692 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2689047 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2689545 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2689650 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2690292 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2690358 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2690765 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2693168 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2693873 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2694113 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2694274 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2694443 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2694600 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2694782 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2694920 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2695073 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2695368 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2697410 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2697576 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2697710 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2697713 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2698018 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2698033 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2698261 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2698352 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2698493 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2698580 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2698705 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2698716 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2699105 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2699220 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2699385 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2701247 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2703138 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2708644 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2709037 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2711155 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2711360 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2713215 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2716337 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2718197 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2722959 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2734006 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2735576 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2755459 
00:25:32.318 Removing: /var/run/dpdk/spdk_pid2757759 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2760659 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2764703 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2796387 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2796921 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2797575 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2798309 00:25:32.318 Removing: /var/run/dpdk/spdk_pid2800309 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2802263 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2805193 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2805670 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2806156 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2806630 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2806828 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2808797 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2808801 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2810910 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2811197 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2811488 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2811869 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2811885 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2814065 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2814298 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2816300 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2818089 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2821000 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2826880 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2826886 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2837948 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2838050 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2840797 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2840949 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2841936 00:25:32.577 Removing: /var/run/dpdk/spdk_pid2843662 00:25:32.577 Clean 00:25:32.577 12:39:38 -- common/autotest_common.sh@1453 -- # return 0 00:25:32.577 12:39:38 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:25:32.577 12:39:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:32.577 12:39:38 -- common/autotest_common.sh@10 -- # set +x 00:25:32.577 12:39:38 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:25:32.577 12:39:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:32.577 12:39:38 -- common/autotest_common.sh@10 -- # set +x 00:25:32.577 12:39:38 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:25:32.577 12:39:38 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:25:32.577 12:39:38 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:25:32.577 12:39:38 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:25:32.577 12:39:38 -- spdk/autotest.sh@398 -- # hostname 00:25:32.577 12:39:38 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-gp-01 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:25:32.837 geninfo: WARNING: invalid characters removed from testname! 
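[Editor's note] The geninfo warning above flags test names containing characters lcov strips, not a capture failure. The lcov invocations that follow merge the baseline and test captures into one tracefile and then prune DPDK, system, and example sources from the report. Condensed, with the long --rc coverage flags dropped and the output directory shortened to $OUT for readability:
OUT=/var/jenkins/workspace/nvmf-phy-autotest/output   # spdk/../output in the log
lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
lcov -q -r "$OUT/cov_total.info" '/usr/*' -o "$OUT/cov_total.info" --ignore-errors unused,unused
lcov -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"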
00:26:40.553 12:40:45 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:26:45.939 12:40:50 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:26:49.236 12:40:54 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:26:52.531 12:40:58 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:26:56.732 12:41:02 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:27:00.029 12:41:05 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:27:04.232 12:41:09 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:04.232 12:41:09 -- spdk/autorun.sh@1 -- $ timing_finish 00:27:04.232 12:41:09 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]] 00:27:04.232 12:41:09 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:04.232 12:41:09 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:27:04.232 12:41:09 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:27:04.232 + [[ -n 2619048 ]] 00:27:04.232 + sudo kill 2619048 00:27:04.241 [Pipeline] } 00:27:04.257 [Pipeline] // stage 00:27:04.263 [Pipeline] } 00:27:04.278 [Pipeline] 
// timeout 00:27:04.283 [Pipeline] } 00:27:04.297 [Pipeline] // catchError 00:27:04.302 [Pipeline] } 00:27:04.316 [Pipeline] // wrap 00:27:04.321 [Pipeline] } 00:27:04.333 [Pipeline] // catchError 00:27:04.341 [Pipeline] stage 00:27:04.344 [Pipeline] { (Epilogue) 00:27:04.356 [Pipeline] catchError 00:27:04.357 [Pipeline] { 00:27:04.370 [Pipeline] echo 00:27:04.371 Cleanup processes 00:27:04.377 [Pipeline] sh 00:27:04.662 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:27:04.662 2849182 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:27:04.677 [Pipeline] sh 00:27:04.966 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:27:04.966 ++ grep -v 'sudo pgrep' 00:27:04.966 ++ awk '{print $1}' 00:27:04.966 + sudo kill -9 00:27:04.966 + true 00:27:04.980 [Pipeline] sh 00:27:05.268 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:17.503 [Pipeline] sh 00:27:17.789 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:18.049 Artifacts sizes are good 00:27:18.065 [Pipeline] archiveArtifacts 00:27:18.072 Archiving artifacts 00:27:18.247 [Pipeline] sh 00:27:18.585 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:27:18.608 [Pipeline] cleanWs 00:27:18.618 [WS-CLEANUP] Deleting project workspace... 00:27:18.618 [WS-CLEANUP] Deferred wipeout is used... 00:27:18.625 [WS-CLEANUP] done 00:27:18.627 [Pipeline] } 00:27:18.640 [Pipeline] // catchError 00:27:18.649 [Pipeline] sh 00:27:18.930 + logger -p user.info -t JENKINS-CI 00:27:18.940 [Pipeline] } 00:27:18.955 [Pipeline] // stage 00:27:18.960 [Pipeline] } 00:27:18.978 [Pipeline] // node 00:27:18.984 [Pipeline] End of Pipeline 00:27:19.023 Finished: SUCCESS
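[Editor's note] In the epilogue's "Cleanup processes" step above, the bare "+ sudo kill -9" followed by "+ true" means pgrep found nothing left to kill once its own invocation was filtered out: kill fails on an empty argument list and the trailing true swallows the error. The cleanup idiom, sketched (the || true construct is an inference from the xtrace, not verified against the pipeline library):
pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk | grep -v 'sudo pgrep' | awk '{print $1}')
sudo kill -9 $pids || true   # $pids deliberately unquoted for word splitting; empty -> kill errors -> swallowed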